SERVER MEMORY AND PERSISTENT MEMORY POPULATION RULES FOR HPE GEN10 SERVERS WITH INTEL XEON SCALABLE PROCESSORS

Technical white paper

https://www.hpe.com/psnow/collection-resources/a00017079ENW

CONTENTS
Introduction  3
Populating HPE DDR4 DIMMs in HPE Gen10 servers  3
  Introduction to DIMM slot locations  4
  Population guidelines for HPE SmartMemory DIMMs  6
  Population guidelines for HPE Persistent Memory  8
Memory interleaving  8
  Rank interleaving  8
  Channel interleaving  8
  Memory controller interleaving  8
  Node interleaving  8
  Disabling memory interleaving  8
Understanding balanced DIMM configurations  9
  Memory configurations that are unbalanced across channels  9
  Memory configurations that are unbalanced across processors  10
Memory RAS mode and population requirements  12
Conclusion  12
Appendix A—HPE Gen10 DIMM slot locations  12
  DIMM slot locations in HPE ProLiant DL360/DL380/DL560/DL580/XL270d Gen10 servers  12
  DIMM slot locations in HPE ProLiant ML350 Gen10 servers  13
  DIMM slot locations in HPE Synergy 480 Gen10 compute modules  14
  DIMM slot locations in HPE Synergy 660 Gen10 compute modules  15
  DIMM slot locations in HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL170r/XL190r/XL230k/XL450 Gen10 servers  16
  DIMM slot locations in HPE Apollo 4200 Gen10 servers (XL420 Gen10)  17
  DIMM slot locations in HPE ProLiant ML110 Gen10 servers  17
Appendix B—Population guidelines for HPE SmartMemory DIMMs  18
  Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant DL360/DL380/DL560/DL580/ML350/XL270d Gen10 servers  18
  Population guidelines for HPE SmartMemory DIMMs in HPE Synergy 480 Gen10 compute modules  18
  Population guidelines for HPE SmartMemory DIMMs in HPE Synergy 660 Gen10 compute modules  19
  Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant BL460c Gen10 server blades and XL170r/XL190r/XL230k/XL450 Gen10 servers  19
  Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant ML110 Gen10 servers  20
  Mixed HPE SmartMemory DIMM configurations  20
Appendix C—Population guidelines for HPE NVDIMM-Ns  21
  Population guidelines for HPE NVDIMM-Ns in HPE ProLiant DL360/DL380/DL560/DL580 Gen10 servers  21
  Population guidelines for HPE NVDIMM-Ns in HPE ProLiant BL460c Gen10 server blades  24
  Population guidelines for HPE NVDIMM-Ns in HPE Synergy 480 and 660 Gen10 compute modules  25
Appendix D—Population guidelines for HPE Persistent Memory  30
Appendix E—Balanced population rules versus RAS population rules  37


INTRODUCTION This paper describes how to populate HPE DDR4 SmartMemory DIMMs and HPE Persistent Memory in HPE ProLiant Gen10 servers and HPE Synergy Gen10 compute modules using Intel® Xeon® Scalable processors. HPE server memory for Gen10 servers supports faster data rates, lower latencies, and greater power efficiency than the DIMMs used in previous generations of HPE servers. HPE SmartMemory also provides superior performance over third-party memory when used in HPE servers.

    HPE Gen10 servers with Intel Xeon processors offer the same number of DIMM slots as HPE Gen9 servers, but the central processing unit (CPU) architecture has changed:

• Gen10: Six memory channels per CPU with up to two DIMM slots per channel (12 DIMM slots per CPU).

• Gen9: Four memory channels per CPU with up to three DIMM slots per channel (12 DIMM slots per CPU).

    This improves nominal bandwidth by 50% if all channels are used. In conjunction with increasing the memory speed from 2400 MT/s to 2933 MT/s, this improves nominal bandwidth by 81% (based on internal HPE performance benchmark testing, March 2019).
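The arithmetic behind those percentages can be checked with a quick nominal-bandwidth estimate (a sketch; 8 bytes per transfer is the standard DDR4 data-bus width, and the 81% figure above is HPE's measured benchmark result, which lands slightly below this purely nominal ratio):

```python
# Nominal per-CPU memory bandwidth = channels x transfer rate (MT/s) x 8 bytes per transfer.
def nominal_bw_gbps(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000  # GB/s

gen9 = nominal_bw_gbps(4, 2400)           # 76.8 GB/s
gen10_at_2400 = nominal_bw_gbps(6, 2400)  # 115.2 GB/s
gen10 = nominal_bw_gbps(6, 2933)          # ~140.8 GB/s

print(f"Extra channels alone: +{gen10_at_2400 / gen9 - 1:.0%}")  # +50%
print(f"Channels plus 2933 MT/s: +{gen10 / gen9 - 1:.0%}")       # ~+83% nominal; HPE's measured figure is 81%
```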

    In addition to describing these improvements, this white paper reviews the rules, best practices, and optimization strategies that should be used when installing HPE DDR4 DIMMs in HPE Gen10 servers.

    POPULATING HPE DDR4 DIMMS IN HPE GEN10 SERVERS HPE Gen10 systems support a variety of flexible memory configurations, enabling the system to be configured and run in any valid memory controller configuration. For optimal performance and functionality, you should follow these rules when populating HPE Gen10 servers with HPE DDR4 DIMMs. Violating these rules may result in reduced memory capacity, performance, or error messages during boot. Table 1 summarizes the overall population rules for HPE Gen10 servers.

    TABLE 1. DIMM population rules for HPE Gen10 servers

    Category Population guidelines

    Processors and DIMM slots

    Install DIMMs only if the corresponding processor is installed. If only one processor is installed in a two-processor system, only half of the DIMM slots are available to populate.

    If a memory channel consists of more than one DIMM slot, the white DIMM slot is located furthest from the CPU. White DIMM slots denote the first slot to be populated in a channel. For one DIMM per channel (DPC), populate white DIMM slots only.

    When mixing HPE SmartMemory DIMMs of different ranks in the same channel, place the HPE SmartMemory DIMM with the higher number of ranks in the white DIMM slot and the HPE SmartMemory DIMM with the lower number of ranks in the black DIMM slot.

    If multiple CPUs are populated, split the HPE SmartMemory DIMMs evenly across the CPUs and follow the corresponding CPU rules when populating DIMMs.

    Performance To maximize performance, it is recommended to balance the total memory capacity across all installed processors and load the channels similarly whenever possible (see Appendix B).

    If the number of DIMMs does not spread evenly across the CPUs, populate as close to evenly as possible.

    Avoid creating an unbalanced configuration for any CPU.

    DIMM types and capacities

The maximum memory capacity is a function of the number of DIMM slots on the platform, the largest DIMM capacity qualified on the platform, and the number and model of qualified processors installed on the platform.

    Do not mix HPE SmartMemory RDIMMs and HPE SmartMemory LRDIMMs in the same system.

    The 128 GB 8R 3DS LRDIMM cannot be mixed with any other DIMMs. The 128 GB 4R LRDIMM may only be mixed with the 64 GB 4R LRDIMM.

    HPE servers based on Intel Xeon Scalable processors do not support unbuffered DIMMs (UDIMMs).

    HPE SmartMemory DIMMs with x4 and x8 DRAMs can be mixed in the same channel. RAS features affected when mixing x4 and x8 DIMMs are online spare, mirrored memory, and HPE Fast Fault Tolerance.

    DIMM speed The maximum memory speed is a function of the memory type, memory configuration, and processor model.

    DIMMs of different speeds may be mixed in any order; however, the server will select the lowest common speed among all of the DIMMs on all of the CPUs.

    HPE SmartMemory DIMMs and HPE NVDIMM-Ns from previous generation servers are not compatible with the current generation. Certain HPE SmartMemory features such as Memory Authentication and Enhanced Performance may not be supported.

    Heterogeneous mix

    There are no performance implications for mixing sets of different capacity DIMMs at the same operating speed. For example, latency and throughput will not be negatively impacted by installing an equal number of 16 GB dual-rank DDR4-2933 DIMMs (one per channel) and 32 GB dual-rank DDR4-2933 DIMMs (one per channel).

    Take each DIMM type and create a configuration as if it were a homogeneous configuration.
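As a rough illustration of how several of the Table 1 rules compose, the sketch below checks a proposed population (hypothetical data structures and helper names; only a subset of the rules above is modeled, and slot colors, DIMM kinds, ranks, and speeds are supplied by the caller):

```python
from dataclasses import dataclass

@dataclass
class Dimm:
    kind: str     # "RDIMM" or "LRDIMM"
    ranks: int
    speed: int    # MT/s

def check_population(channels):
    """channels: list of dicts with "white" and "black" slots, each a Dimm or None."""
    errors = []
    installed = [d for ch in channels for d in (ch["white"], ch["black"]) if d]
    # Do not mix RDIMMs and LRDIMMs in the same system.
    if len({d.kind for d in installed}) > 1:
        errors.append("RDIMMs and LRDIMMs mixed in the same system")
    for i, ch in enumerate(channels, start=1):
        # Populate the white slot first; one DPC uses white slots only.
        if ch["black"] and not ch["white"]:
            errors.append(f"channel {i}: black slot populated while the white slot is empty")
        # When ranks differ within a channel, the higher-rank DIMM goes in the white slot.
        if ch["white"] and ch["black"] and ch["white"].ranks < ch["black"].ranks:
            errors.append(f"channel {i}: the higher-rank DIMM should be in the white slot")
    # All DIMMs run at the lowest common speed.
    common_speed = min((d.speed for d in installed), default=None)
    return errors, common_speed

dual = Dimm("RDIMM", ranks=2, speed=2933)
single = Dimm("RDIMM", ranks=1, speed=2933)
print(check_population([{"white": dual, "black": single}, {"white": single, "black": None}]))
```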


Introduction to DIMM slot locations In general, DIMM population order follows the same logic for all HPE Gen10 servers—although physical arrangement may vary from server to server. To populate DIMMs in the correct order and location, refer to the illustrations in Appendix B for HPE SmartMemory DIMMs, Appendix C for HPE NVDIMM-Ns, and Appendix D for HPE Persistent Memory (featuring Intel® Optane™ DC Persistent Memory, available in 128, 256, and 512 GB modules). Each illustration reflects the DIMM slots to use for a given number of DIMMs around a single processor, assuming a common DIMM type.

    If multiple processors are installed, split the DIMMs evenly across the processors and follow the corresponding rule when populating DIMMs for each processor (see Figure 4 for an example). For optimal throughput and reduced latency, populate all six channels of each installed CPU identically.

    The first DIMM slots for each channel have white connectors, and the second DIMM slots, if any, have black connectors.

    Figure 1 shows the DIMM slot configuration for the HPE ProLiant DL380 Gen10 server, which has two sockets and 24 DIMM slots.

    FIGURE 1. 24 DIMM slot locations in HPE ProLiant DL360/DL380/DL560/XL270d/DL580 Gen10 servers


    Figure 2 shows the DIMM slot configuration in HPE ProLiant BL460c Gen10 server blades, which have two sockets and 16 DIMM slots. The configuration is similar to the HPE ProLiant DL380 Gen10 server, with the main difference being the number of slots on each memory channel. In these servers, one channel on each side of the CPU has two slots attached, while the remaining channels on each side of the CPU have only one slot attached. In the rest of this white paper, this will be referenced as a 2+1+1 configuration. You should populate the memory for these servers following the illustrations found in Appendix B.

    FIGURE 2. 16 DIMM slot locations in HPE ProLiant BL460c Gen10 two-socket server blades


    Population guidelines for HPE SmartMemory DIMMs This section provides generic guidelines for populating HPE SmartMemory DIMMs in HPE Gen10 servers. See Appendix B for population guidelines for specific HPE Gen10 servers.

HPE SmartMemory DIMMs and HPE NVDIMM-Ns may be populated in many permutations that, while allowed, may not provide optimal performance. The system ROM reports a message during the power-on self-test (POST) if the population is not supported or is not balanced.

    Table 2 shows the population guidelines for HPE SmartMemory DIMMs in HPE Gen10 servers with twelve DIMM slots per CPU (e.g., HPE ProLiant DL360, DL380, DL560, DL580, XL270d, and ML350 Gen10 servers). For a given number of HPE SmartMemory DIMMs per CPU, populate those DIMMs in the corresponding numbered DIMM slot(s) on the corresponding row.

    TABLE 2. HPE SmartMemory DIMM population guidelines for HPE Gen10 servers with twelve DIMM slots per CPU

HPE ProLiant Gen10, 12 slots per CPU: DIMM population order (slot numbers)

1 DIMM: 8
2 DIMMs: 8, 10
3 DIMMs: 8, 10, 12
4 DIMMs: 3, 5, 8, 10
5 DIMMs*: 3, 5, 8, 10, 12
6 DIMMs: 1, 3, 5, 8, 10, 12
7 DIMMs*: 1, 3, 5, 7, 8, 10, 12
8 DIMMs: 3, 4, 5, 6, 7, 8, 9, 10
9 DIMMs*: 1, 3, 5, 7, 8, 9, 10, 11, 12
10 DIMMs*: 1, 3, 4, 5, 6, 7, 8, 9, 10, 12
11 DIMMs*: 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
12 DIMMs: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

    * Unbalanced

    As shown in Table 2, memory should be installed as indicated based upon the total number of DIMMs being installed per CPU. For example:

    • If two HPE SmartMemory DIMMs are being installed per CPU, they should be installed in DIMM slots 8 and 10.

    • If six HPE SmartMemory DIMMs are being installed per CPU, they should be installed in DIMM slots 1, 3, 5, 8, 10, and 12.

    Unbalanced configurations are noted with an asterisk and may not provide optimal performance. This is because memory performance may be inconsistent and reduced compared to balanced configurations. Although the eight-DIMM configuration is balanced, it provides 33% less bandwidth than the six-DIMM configuration because it does not use all channels. Other configurations (such as the 11-DIMM configuration) will provide maximum bandwidth in some address regions and less bandwidth in others. Applications that rely heavily on throughput will be most impacted by an unbalanced configuration. Other applications that rely more on memory capacity and less on throughput will be far less impacted by such a configuration.
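The population order in Table 2 can also be captured directly as a lookup, for example (a sketch that simply mirrors the table, flagging the asterisked counts as unbalanced):

```python
# Slot population order for HPE Gen10 servers with twelve DIMM slots per CPU (Table 2).
POPULATION_12_SLOT = {
    1:  [8],
    2:  [8, 10],
    3:  [8, 10, 12],
    4:  [3, 5, 8, 10],
    5:  [3, 5, 8, 10, 12],                     # unbalanced
    6:  [1, 3, 5, 8, 10, 12],
    7:  [1, 3, 5, 7, 8, 10, 12],               # unbalanced
    8:  [3, 4, 5, 6, 7, 8, 9, 10],
    9:  [1, 3, 5, 7, 8, 9, 10, 11, 12],        # unbalanced
    10: [1, 3, 4, 5, 6, 7, 8, 9, 10, 12],      # unbalanced
    11: [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],  # unbalanced
    12: list(range(1, 13)),
}
UNBALANCED = {5, 7, 9, 10, 11}

def slots_for(dimm_count):
    """Return (slots, balanced) for a given number of DIMMs per CPU."""
    return POPULATION_12_SLOT[dimm_count], dimm_count not in UNBALANCED

print(slots_for(6))  # ([1, 3, 5, 8, 10, 12], True)
```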


    Table 3 shows the population guidelines for HPE SmartMemory DIMMs in HPE Gen10 servers with eight DIMM slots per CPU (e.g., HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL170r/XL190r/XL230k/XL450 Gen10 servers).

    TABLE 3. HPE SmartMemory DIMM population guidelines for HPE Gen10 servers with eight DIMM slots per CPU

HPE ProLiant Gen10, 8 slots per CPU: DIMM population order (slot numbers)

1 DIMM: 3
2 DIMMs: 2, 3
3 DIMMs: 1, 2, 3
4 DIMMs: 2, 3, 6, 7
5 DIMMs*: 1, 2, 3, 6, 7
6 DIMMs: 1, 2, 3, 6, 7, 8
7 DIMMs*: 1, 2, 3, 4, 6, 7, 8
8 DIMMs*: 1, 2, 3, 4, 5, 6, 7, 8

    * Unbalanced

    As shown in Table 3, memory should be installed as indicated based upon the total number of DIMMs being installed per CPU. For example:

    • If two HPE SmartMemory DIMMs are being installed, they should be installed in DIMM slots 2 and 3.

    • If six HPE SmartMemory DIMMs are being installed, they should be installed in DIMM slots 1, 2, 3, 6, 7, and 8.

    Unbalanced configurations are noted with an asterisk. In these configurations, memory performance may be inconsistent or reduced compared to a balanced configuration.

    Table 4 shows the population guidelines for HPE SmartMemory DIMMs in HPE Gen10 servers with six DIMM slots per CPU (e.g., HPE ProLiant ML110 Gen10 servers).

    TABLE 4. HPE SmartMemory DIMM population guidelines for HPE Gen10 servers with six DIMM slots per CPU

HPE ProLiant Gen10, 6 slots per CPU: DIMM population order (slot numbers)

1 DIMM: 4
2 DIMMs: 4, 5
3 DIMMs: 4, 5, 6
4 DIMMs: 2, 3, 4, 5
5 DIMMs*: 2, 3, 4, 5, 6
6 DIMMs: 1, 2, 3, 4, 5, 6

    * Unbalanced

    As shown in Table 4, memory should be installed as indicated based upon the total number of DIMMs being installed per CPU. For example:

    • If two HPE SmartMemory DIMMs are being installed, they should be installed in DIMM slots 4 and 5.

    • If four HPE SmartMemory DIMMs are being installed, they should be installed in DIMM slots 2, 3, 4, and 5.

    Unbalanced configurations are noted with an asterisk. In these configurations, memory performance may be inconsistent or reduced compared to a balanced configuration.


Population guidelines for HPE Persistent Memory This section provides generic guidelines for populating HPE Persistent Memory in HPE Gen10 servers. See Appendix C and Appendix D for population guidelines for specific HPE Gen10 servers.

HPE SmartMemory DIMMs and HPE Persistent Memory may be populated in many permutations that, while allowed, may not provide optimal performance. The system ROM reports a message during the power-on self-test if the population is not supported or is suboptimal.

Populate HPE SmartMemory DIMMs first, then add HPE Persistent Memory modules in the remaining DIMM slots. Population rules specific to HPE NVDIMM-Ns and HPE Persistent Memory include the following (a brief sketch after this list illustrates the per-channel placement and mixing rules):

• The HPE Persistent Memory population may be different on every processor (unlike HPE SmartMemory DIMMs, for which an uneven split across processors is not optimal).

    • If a channel has both an HPE SmartMemory DIMM and an HPE Persistent Memory module, the HPE SmartMemory DIMM must be in the white slot and the HPE Persistent Memory module must be in the black slot.

    • Since the HPE 16 GiB NVDIMM-N is an RDIMM, it must be mixed only with HPE SmartMemory RDIMMs and must not be mixed with HPE SmartMemory LRDIMMs.

    • HPE Persistent Memory can be paired with either HPE SmartMemory RDIMM or HPE SmartMemory LRDIMM, but mixing RDIMMs and LRDIMMs is not allowed.

    • The use of HPE Persistent Memory disables RAS features like Online Spare, Mirrored Memory, and HPE Fast Fault Tolerance.
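A minimal sketch of those per-channel and mixing rules follows (hypothetical helper names and data shapes; "pmem" here stands for either an HPE NVDIMM-N or an HPE Persistent Memory module, and only the white/black placement and RDIMM/LRDIMM rules from the list above are modeled):

```python
def check_pmem_rules(channels, smart_kinds, has_nvdimm_n):
    """channels: list of (white, black) tuples, each None, "smart", or "pmem".
    smart_kinds: set of SmartMemory DIMM kinds present, e.g. {"RDIMM"}."""
    errors = []
    for i, (white, black) in enumerate(channels, start=1):
        # If a channel holds both, the SmartMemory DIMM goes in the white slot
        # and the NVDIMM-N / persistent memory module in the black slot.
        if white == "pmem" and black == "smart":
            errors.append(f"channel {i}: SmartMemory belongs in the white slot, persistent memory in the black slot")
    # Mixing RDIMMs and LRDIMMs is never allowed, and NVDIMM-Ns (which are RDIMMs)
    # may only be combined with SmartMemory RDIMMs.
    if {"RDIMM", "LRDIMM"} <= smart_kinds:
        errors.append("RDIMMs and LRDIMMs mixed in the same system")
    if has_nvdimm_n and "LRDIMM" in smart_kinds:
        errors.append("HPE NVDIMM-Ns may not be mixed with SmartMemory LRDIMMs")
    return errors

print(check_pmem_rules([("smart", "pmem"), ("pmem", "smart")], {"RDIMM"}, has_nvdimm_n=True))
```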

    MEMORY INTERLEAVING Memory interleaving is a technique used to maximize memory performance by spreading memory addresses evenly across memory devices. Interleaved memory results in a contiguous memory region across multiple devices with sequential accesses using each memory device in turn, instead of using the same one repeatedly. The result is higher memory throughput due to the reduced wait times for memory banks to become available for desired operations between reads and writes.
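To make the address-spreading idea concrete, the sketch below maps a physical address to a channel with a simple round-robin scheme (an illustration only; the 256-byte granularity is an assumed example value, not a documented Gen10 parameter):

```python
INTERLEAVE_GRANULARITY = 256  # bytes per consecutive chunk on one channel (illustrative value only)

def channel_for_address(addr, num_channels):
    """Round-robin mapping: consecutive chunks of the address space land on successive channels."""
    return (addr // INTERLEAVE_GRANULARITY) % num_channels

# A sequential scan across six interleaved channels touches each channel in turn,
# so channel bandwidth accumulates instead of one channel being hit repeatedly.
for addr in range(0, 6 * INTERLEAVE_GRANULARITY, INTERLEAVE_GRANULARITY):
    print(hex(addr), "-> channel", channel_for_address(addr, 6))
```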

    Memory interleaving techniques include:

Rank interleaving This technique interleaves across ranks within a memory channel. When configured correctly, sequential reads within the channel will be interleaved across ranks. This enhances channel throughput by increasing utilization on the channel. Rank interleaving is a lower priority than channel interleaving when creating an interleave region; for example, a 1 DPC region interleaved across three channels takes priority over a two-DIMM region within a single channel.

    Channel interleaving This technique interleaves across memory channels. When configured correctly, sequential reads will be interleaved across memory channels. Channel bandwidth will be accumulated across the interleaved channels. The Unified Extensible Firmware Interface System utilities user guide for HPE ProLiant Gen10 servers and HPE Synergy servers goes into detail regarding setting up memory for interleaving.

    Memory controller interleaving Intel Xeon Scalable processors have two memory controllers per CPU, and the channels selected for channel interleaving are based on matching channels in the memory controllers.

    Node interleaving This technique interleaves across sockets and is not optimal for modern software and operating systems that understand non-uniform memory access (NUMA) system architectures. Node interleaving is not supported while HPE NVDIMM-Ns are present. Non-NUMA operating environments, however, may see improved performance by enabling node interleaving.

Disabling memory interleaving This option is available from the Advanced Power Management menu in the RBSU Advanced Options menu or through the REST API. Disabling memory interleaving saves some power per DIMM but decreases overall memory system performance.

https://support.hpe.com/hpesc/public/docDisplay?docId=a00016407en_us&docLocale=en_US


    UNDERSTANDING BALANCED DIMM CONFIGURATIONS Optimal memory performance is achieved when the system is configured with a fully homogeneous and balanced DIMM configuration. Unbalanced DIMM configurations are those in which the installed memory is not distributed evenly across the memory channels. Hewlett Packard Enterprise discourages unbalanced configurations because they will always have lower performance than similar balanced configurations. There are two types of unbalanced configurations, each with its own performance implications.

    • Unbalanced across channels within a CPU: The memory installed on each populated channel is not identical. This is undesirable for:

    – HPE SmartMemory DIMMs

    – HPE NVDIMM-Ns with interleaving enabled (but is acceptable for HPE NVDIMM-Ns with interleaving disabled)

    • Unbalanced across processors: A different amount of memory is installed on each of the processors. This is usually undesirable for HPE SmartMemory DIMMs, but is often acceptable for HPE NVDIMM-Ns.

    Memory configurations that are unbalanced across channels In unbalanced memory configurations across channels, the memory controller will split memory into regions, as shown in Figure 3. In a balanced configuration, there will be one region that includes all installed DIMMs. If the memory configuration is unbalanced, it will attempt to create multiple balanced regions. First, it will create the largest possible balanced region with the installed memory. The next largest region comes next, and so on. In this manner, the memory controller will create regions until all installed memory has been assigned to a region. The performance of these regions differs, resulting in erratic performance.

    FIGURE 3. Examples of balanced and unbalanced configurations

In Figure 3, the illustration on the left depicts a balanced configuration, since each of the populated memory channels contains the same number of DIMMs (one each). Conversely, the image on the right is unbalanced because the DIMM in DIMM slot five creates a second memory region.
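The region-building behavior described above can be sketched as a greedy grouping over per-channel DIMM counts (a simplification; the real memory controller also accounts for DIMM capacities, ranks, and memory-controller boundaries, so it will not reproduce every case in Table 5):

```python
def build_regions(dimms_per_channel):
    """Greedy region formation: each pass takes one DIMM from every still-populated channel,
    so the largest balanced region is created first, then progressively smaller ones."""
    remaining = list(dimms_per_channel)
    regions = []
    while any(remaining):
        members = [ch for ch, left in enumerate(remaining) if left > 0]
        regions.append(members)            # channels interleaved together in this region
        for ch in members:
            remaining[ch] -= 1
    return regions

print(build_regions([1, 1, 1, 1, 1, 1]))   # balanced: one region spanning all six channels
print(build_regions([2, 1, 1, 1, 1, 1]))   # 7 DIMMs: a six-channel region plus a one-channel region
```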


The primary effect of memory configurations that are unbalanced across channels is a reduction in the number of channels that can be interleaved. Interleaving fewer channels results in a decrease in memory throughput in those regions that span fewer memory channels. In the unbalanced example in Table 5, worst-case measured memory throughput in Region 2 would be 33% or less of the throughput in the balanced example. Even in Region 1 in the unbalanced picture, throughput would be limited to no more than 66% of what the single region in the balanced example could provide. See Table 5 for more details.

    TABLE 5. Impact of unbalanced configurations on memory throughput

DIMMs per CPU | Interleaved channels, largest group | Interleaved channels, smallest group | Throughput vs. peak, largest group | Throughput vs. peak, smallest group
1 | 1 | 1 | 16.67% | 16.67%
2 | 2 | 2 | 33.33% | 33.33%
3 | 3 | 3 | 50.00% | 50.00%
4 | 4 | 4 | 66.67% | 66.67%
5* | 4 | 1 | 66.67% | 16.67%
6 | 6 | 6 | 100.00% | 100.00%
7* | 6 | 1 | 100.00% | 16.67%
8 | 4 | 4 | 66.67% | 66.67%
9* | 6 | 3 | 100.00% | 50.00%
10* | 6 | 4 | 100.00% | 66.67%
11* | 6 | 1 | 100.00% | 16.67%
12 | 6 | 6 | 100.00% | 100.00%

    * Unbalanced

    Unbalanced configurations are tagged with an asterisk. In these cases, there will be multiple interleave regions of different sizes. Each region will exhibit different performance characteristics. When running a benchmark sensitive to throughput (such as STREAM), the benchmark program could measure the throughput of the largest interleave group, the smallest group or somewhere in between.

    Table 5 shows all possible DIMM configurations for a single CPU. The “Largest Group” column shows the number of channels in the largest interleave group possible for that configuration. The “Smallest Group” column shows the number of channels in the smallest interleave group. The performance comparison in the last two columns is relative to a full population where all channels are interleaved together.
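The throughput percentages in Table 5 are simply the fraction of the six channels that a given interleave group spans; a one-line check reproduces the last two columns:

```python
# Relative throughput of an interleave group = channels in the group / 6 channels per CPU.
for channels in (1, 2, 3, 4, 6):
    print(f"{channels} interleaved channel(s): {channels / 6:.2%} of peak")
```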

    Memory configurations that are unbalanced across processors Figure 4 shows a memory configuration that is unbalanced across processors. The CPU 1 threads operating on the larger memory capacity of CPU 1 may have adequate local memory with relatively low latencies and high throughput. The CPU 2 threads operating on the smaller memory capacity of CPU 2 may consume all available memory on CPU 2 and request remote memory from CPU 1. The longer latencies and limited throughput of cross-CPU communications associated with the remote memory will result in reduced performance of those threads. In practice, this may result in non-uniform performance characteristics for software program threads, depending on which processor executes them.


    FIGURE 4. Example of memory that is unbalanced across processors

    Figure 4 shows an example of unbalanced memory configurations across processors. In this example, the first processor contains four DIMMs, while the second CPU has eight DIMMs installed.

    FIGURE 5. Example of a memory configuration that is balanced across processors

    Figure 5 shows an example of a configuration that is balanced across processors. In this example, both processors have six DIMMs installed.


    MEMORY RAS MODE AND POPULATION REQUIREMENTS HPE Gen10 servers using Intel Xeon Scalable processors support four different memory reliability, accessibility, and serviceability (RAS) modes. If you plan to enable any of these advanced RAS modes, please see the HPE Server Memory RAS white paper for more specific information regarding memory configuration and population rules. The RAS modes supported include:

    • Advanced error correction code

    • Online spare

    • Mirrored memory

    • HPE Fast Fault Tolerance

    The rules on channel DIMM population and channel DIMM matching vary by the RAS mode used. However, regardless of RAS mode, the requirements for DIMM population within a system and a channel must be met at all times.

    For RAS modes that require matching DIMM populations, the same DIMM slot positions across channels must hold the same DIMM type with regard to size and organization. DIMM timings do not have to match, but timings will be set to support all DIMMs populated (that is, DIMMs with slower timings will force faster DIMMs to the slower common timing modes).
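For example, a trivial illustration of the common-timing rule (the CAS latency values below are made-up sample numbers, not qualified HPE part timings):

```python
# Timings are set to the most conservative values among the installed DIMMs,
# so the DIMM with the slowest timings sets the common timing mode.
cas_latencies = [21, 21, 22]        # CAS latency in clock cycles, sample values only
print("Common CAS latency:", max(cas_latencies))   # 22: the slowest (largest) timing wins
```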

    CONCLUSION Following the population guidelines maximizes memory performance of HPE SmartMemory DIMMs and HPE NVDIMM-Ns in HPE Gen10 servers with Intel Xeon Scalable processors.

    APPENDIX A—HPE GEN10 DIMM SLOT LOCATIONS This section illustrates the physical location of the DIMM slots for HPE Gen10 servers using Intel Xeon Scalable processors. HPE servers support twelve, eight, or six DIMM slots per CPU.

DIMM slot locations in HPE ProLiant DL360/DL380/DL560/DL580/XL270d Gen10 servers HPE ProLiant DL360, DL380, DL560, DL580, and XL270d Gen10 servers have twelve DIMM slots per CPU.

    FIGURE 6. DIMM slot locations in HPE ProLiant DL360/DL380/DL560/DL580/XL270d Gen10 servers

    https://www.hpe.com/psnow/doc/4aa4-3490enw


    DIMM slot locations in HPE ProLiant ML350 Gen10 servers HPE ProLiant ML350 Gen10 servers have twelve DIMM slots per CPU.

    FIGURE 7. DIMM slot locations in HPE ProLiant ML350 Gen10 servers


    DIMM slot locations in HPE Synergy 480 Gen10 compute modules HPE Synergy 480 Gen10 compute modules have twelve DIMM slots per CPU.

    FIGURE 8. DIMM slot locations for HPE Synergy 480 Gen10 compute modules


    DIMM slot locations in HPE Synergy 660 Gen10 compute modules HPE Synergy 660 Gen10 compute modules have twelve DIMM slots per CPU.

    FIGURE 9. DIMM slot locations for HPE Synergy 660 Gen10 compute modules


DIMM slot locations in HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL170r/XL190r/XL230k/XL450 Gen10 servers HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL170r/XL190r/XL230k/XL450 Gen10 servers have eight DIMM slots per CPU. Four channels have one DIMM slot, and two channels have two DIMM slots.

    FIGURE 10. DIMM slot locations in HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL170r/XL190r/XL230k/XL450 Gen10 servers


DIMM slot locations in HPE Apollo 4200 Gen10 servers (XL420 Gen10) HPE Apollo 4200 Gen10 servers have eight DIMM slots per CPU. Four channels have one DIMM slot, and two channels have two DIMM slots.

FIGURE 11. DIMM slot locations in HPE Apollo 4200 Gen10 servers (XL420 Gen10)

    DIMM slot locations in HPE ProLiant ML110 Gen10 servers HPE ProLiant ML110 Gen10 servers have six DIMM slots per CPU.

    FIGURE 12. DIMM slot locations in HPE ProLiant ML110 Gen10 servers


    APPENDIX B—POPULATION GUIDELINES FOR HPE SMARTMEMORY DIMMS This section illustrates which DIMM slots to use when populating memory in HPE Gen10 servers using Intel Xeon Scalable processors. Each illustration reflects the DIMM slots to use for a given number of DIMMs around a single processor, given a common DIMM type. If multiple processors are installed, split the DIMMs evenly across the processors and follow the corresponding rule when populating DIMMs for each processor. Tables 6 to 10 represent the bootstrap processor and the population shown will ensure that the first DIMM populated is in the right place. Unbalanced configurations are noted with an asterisk. In these configurations, memory performance may be inconsistent or reduced compared to a balanced configuration.

In cases of a heterogeneous mix, take each DIMM type and create a configuration as if it were a homogeneous configuration. Depending on the per-channel rules, populate the DIMMs with the highest rank count in the white DIMM slots in each channel, then populate the other DIMMs in the black DIMM slots in each channel. See Figure 13 at the end of this appendix for an example of a popular mix.
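A sketch of that procedure (hypothetical helper; it assumes two DIMM types spread one per channel each, as in the 32 GB + 16 GB example of Figure 13, and simply puts the higher-rank type in the white slots):

```python
def place_mixed(type_a, type_b, num_channels=6):
    """type_a / type_b: dicts such as {"size_gb": 32, "ranks": 2}. Returns one (white, black)
    assignment per channel, putting the higher-rank DIMM type in the white slot."""
    white, black = sorted((type_a, type_b), key=lambda d: d["ranks"], reverse=True)
    return [{"white": white, "black": black} for _ in range(num_channels)]

# Example: mixing 32 GB and 16 GB SmartMemory DIMMs, one of each per channel
# (the rank counts below are illustrative, not taken from a specific HPE part number).
layout = place_mixed({"size_gb": 32, "ranks": 2}, {"size_gb": 16, "ranks": 1})
print(layout[0])   # {'white': {'size_gb': 32, 'ranks': 2}, 'black': {'size_gb': 16, 'ranks': 1}}
```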

Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant DL360/DL380/DL560/DL580/ML350/XL270d Gen10 servers HPE ProLiant DL360, DL380, DL560, DL580, ML350, and XL270d Gen10 servers have twelve DIMM slots per CPU.

TABLE 6. Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant DL360/DL380/DL560/DL580/ML350/XL270d Gen10 servers

DIMM population order for CPU 1/CPU 2 (slot numbers)

1 DIMM: 8
2 DIMMs: 8, 10
3 DIMMs: 8, 10, 12
4 DIMMs: 3, 5, 8, 10
5 DIMMs*: 3, 5, 8, 10, 12
6 DIMMs: 1, 3, 5, 8, 10, 12
7 DIMMs*: 1, 3, 5, 7, 8, 10, 12
8 DIMMs: 3, 4, 5, 6, 7, 8, 9, 10
9 DIMMs*: 1, 3, 5, 7, 8, 9, 10, 11, 12
10 DIMMs*: 1, 3, 4, 5, 6, 7, 8, 9, 10, 12
11 DIMMs*: 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
12 DIMMs: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

    * Unbalanced

Population guidelines for HPE SmartMemory DIMMs in HPE Synergy 480 Gen10 compute modules HPE Synergy 480 Gen10 compute modules have twelve DIMM slots per CPU.

    TABLE 7. Population guidelines for HPE SmartMemory DIMMs in HPE Synergy 480 Gen10 compute modules

DIMM population order (slot numbers)

1 DIMM | CPU 1: 8 | CPU 2: 5
2 DIMMs | CPU 1: 8, 10 | CPU 2: 3, 5
3 DIMMs | CPU 1: 8, 10, 12 | CPU 2: 1, 3, 5
4 DIMMs | CPU 1: 3, 5, 8, 10 | CPU 2: 3, 5, 8, 10
5 DIMMs* | CPU 1: 3, 5, 8, 10, 12 | CPU 2: 1, 3, 5, 8, 10
6 DIMMs | CPU 1: 1, 3, 5, 8, 10, 12 | CPU 2: 1, 3, 5, 8, 10, 12
7 DIMMs* | CPU 1: 1, 3, 5, 7, 8, 10, 12 | CPU 2: 1, 3, 5, 6, 8, 10, 12
8 DIMMs | CPU 1: 3, 4, 5, 6, 7, 8, 9, 10 | CPU 2: 3, 4, 5, 6, 7, 8, 9, 10
9 DIMMs* | CPU 1: 1, 3, 5, 7, 8, 9, 10, 11, 12 | CPU 2: 1, 2, 3, 4, 5, 6, 8, 10, 12
10 DIMMs* | CPU 1: 1, 3, 4, 5, 6, 7, 8, 9, 10, 12 | CPU 2: 1, 3, 4, 5, 6, 7, 8, 9, 10, 12
11 DIMMs* | CPU 1: 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 | CPU 2: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12
12 DIMMs | CPU 1: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 | CPU 2: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

    * Unbalanced


    Population guidelines for HPE SmartMemory DIMMs in HPE Synergy 660 Gen10 compute modules HPE Synergy 660 Gen10 compute modules have twelve DIMM slots per CPU.

    TABLE 8. Population guidelines for HPE SmartMemory DIMMs in HPE Synergy 660 Gen10 compute modules

DIMM population order (slot numbers)

1 DIMM | CPU 1/CPU 2: 8 | CPU 3/CPU 4: 5
2 DIMMs | CPU 1/CPU 2: 8, 10 | CPU 3/CPU 4: 3, 5
3 DIMMs | CPU 1/CPU 2: 8, 10, 12 | CPU 3/CPU 4: 1, 3, 5
4 DIMMs | CPU 1/CPU 2: 3, 5, 8, 10 | CPU 3/CPU 4: 3, 5, 8, 10
5 DIMMs* | CPU 1/CPU 2: 3, 5, 8, 10, 12 | CPU 3/CPU 4: 1, 3, 5, 8, 10
6 DIMMs | CPU 1/CPU 2: 1, 3, 5, 8, 10, 12 | CPU 3/CPU 4: 1, 3, 5, 8, 10, 12
7 DIMMs* | CPU 1/CPU 2: 1, 3, 5, 7, 8, 10, 12 | CPU 3/CPU 4: 1, 3, 5, 6, 8, 10, 12
8 DIMMs | CPU 1/CPU 2: 3, 4, 5, 6, 7, 8, 9, 10 | CPU 3/CPU 4: 3, 4, 5, 6, 7, 8, 9, 10
9 DIMMs* | CPU 1/CPU 2: 1, 3, 5, 7, 8, 9, 10, 11, 12 | CPU 3/CPU 4: 1, 2, 3, 4, 5, 6, 8, 10, 12
10 DIMMs* | CPU 1/CPU 2: 1, 3, 4, 5, 6, 7, 8, 9, 10, 12 | CPU 3/CPU 4: 1, 3, 4, 5, 6, 7, 8, 9, 10, 12
11 DIMMs* | CPU 1/CPU 2: 1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 | CPU 3/CPU 4: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12
12 DIMMs | CPU 1/CPU 2: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 | CPU 3/CPU 4: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

    * Unbalanced

Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant BL460c Gen10 server blades and XL170r/XL190r/XL230k/XL450 Gen10 servers HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL170r/XL190r/XL230k/XL450 Gen10 servers have eight DIMM slots per CPU.

On these platforms, the recommended configuration for maximum throughput is six DIMMs per CPU. Populating eight DIMMs per CPU maximizes memory capacity but results in an unbalanced configuration, which will reduce performance.

    TABLE 9. Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant BL460c Gen10 server blades and HPE ProLiant XL230k/XL450 Gen10 servers

DIMM population order per CPU (slot numbers), BL460c and XL230k/XL450 Gen10 servers

1 DIMM: 3
2 DIMMs: 2, 3
3 DIMMs: 1, 2, 3
4 DIMMs: 2, 3, 6, 7
5 DIMMs*: 1, 2, 3, 6, 7
6 DIMMs: 1, 2, 3, 6, 7, 8
7 DIMMs*: 1, 2, 3, 4, 6, 7, 8
8 DIMMs*: 1, 2, 3, 4, 5, 6, 7, 8

    * Unbalanced


    TABLE 10. Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant XL170r/XL190r Gen10 servers

DIMM population order per CPU (slot numbers), XL170r/XL190r Gen10 servers

1 DIMM: 6
2 DIMMs: 6, 7
3 DIMMs: 6, 7, 8
4 DIMMs: 2, 3, 6, 7
5 DIMMs*: 2, 3, 6, 7, 8
6 DIMMs: 1, 2, 3, 6, 7, 8
7 DIMMs*: 1, 2, 3, 5, 6, 7, 8
8 DIMMs*: 1, 2, 3, 4, 5, 6, 7, 8

    * Unbalanced

    Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant ML110 Gen10 servers HPE ProLiant ML110 Gen10 servers have six DIMM slots per CPU.

    TABLE 11. Population guidelines for HPE SmartMemory DIMMs in HPE ProLiant ML110 Gen10 servers

DIMM population order per CPU (slot numbers)

1 DIMM: 4
2 DIMMs: 4, 5
3 DIMMs: 4, 5, 6
4 DIMMs: 2, 3, 4, 5
5 DIMMs*: 2, 3, 4, 5, 6
6 DIMMs: 1, 2, 3, 4, 5, 6

    * Unbalanced

    Mixed HPE SmartMemory DIMM configurations In cases of a heterogeneous mix, take each DIMM type and create a configuration as though it were a homogeneous configuration. Depending on the per-channel rules, populate the DIMMs with highest rank count in white DIMM slots in each channel, then populate the other DIMMs in the black DIMM slots in each channel as shown in the following illustration.

    FIGURE 13. Mixing HPE SmartMemory 32 GB and 16 GB DIMMs


    APPENDIX C—POPULATION GUIDELINES FOR HPE NVDIMM-NS HPE NVDIMM-Ns may be included alongside HPE SmartMemory DIMMs in select HPE Gen10 servers.

    Population guidelines for HPE NVDIMM-Ns in HPE ProLiant DL360/DL380/DL560/DL580 Gen10 servers Since HPE NVDIMM-Ns are designed to unleash maximum system performance, systems using HPE NVDIMM-Ns should have one HPE SmartMemory DPC to provide maximum memory performance (i.e., six HPE SmartMemory DIMMs per CPU). One to six HPE NVDIMM-Ns per processor (based on persistent memory capacity and performance needs) should then be added as follows:

    TABLE 12. Population guidelines for HPE NVDIMM-Ns with six HPE SmartMemory DIMMs attached to CPU 1/CPU 3 in HPE ProLiant DL360/DL380/DL560/DL580 Gen10 servers

CPU 1/CPU 3 (note 1)

Table columns, left to right: HPE NVDIMM-N count; DIMM slots 1 through 12 (memory controller 2: channels 6, 5, 4; memory controller 1: channels 1, 2, 3); persistent memory capacity in GiB (note 2); nominal interleaved persistent memory bandwidth in GB/s (note 3).

    0 R R R R R R 0 0

    1 R N R R R R R 16 21.33

    2 R N R N R R R R 32 42.67

    3 R N R N R N R R R 48 64

    4 R N R N R R N R N R 64 85.33

5 (note 4) R N R N R N R N R N R 80 Inconsistent

    6 R N R N R N N R N R N R 96 128

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: CPU 3 is only available on HPE ProLiant DL560 and HPE ProLiant DL580 Gen10 servers.
Note 2: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 3: Nominal persistent memory bandwidth assumes that the system is running at 2667 MT/s, that NVDIMM-N interleaving is enabled, and that no bandwidth is being used for the HPE SmartMemory DIMMs.
Note 4: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.
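The capacity and bandwidth columns in Table 12 follow directly from the per-module figures given in the notes (a quick check; the document quotes 2667 MT/s, whose exact DDR4 value is 2666.67 MT/s):

```python
# Per-module figures from the notes to Table 12: 16 GiB per NVDIMM-N,
# and a DDR4-2666 channel moves 2666.67 MT/s x 8 bytes = ~21.33 GB/s.
GIB_PER_NVDIMM = 16
GBPS_PER_CHANNEL = 2666.67 * 8 / 1000

for count in (1, 2, 3, 4, 6):   # the five-module row is non-interleaved, hence "Inconsistent"
    print(f"{count} NVDIMM-N(s): {count * GIB_PER_NVDIMM} GiB, "
          f"~{count * GBPS_PER_CHANNEL:.2f} GB/s interleaved")
```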

The population guidelines described in Table 13 for CPU 2/CPU 4 are a mirror image of those for CPU 1/CPU 3 in Table 12.

    TABLE 13. Population guidelines for HPE NVDIMM-Ns with six HPE SmartMemory DIMMs attached to CPU 2/CPU 4 in HPE ProLiant DL360/DL380/DL560/DL580 Gen10 servers

CPU 2/CPU 4 (note 1)

Table columns, left to right: HPE NVDIMM-N count; DIMM slots 1 through 12 (memory controller 1: channels 3, 2, 1; memory controller 2: channels 4, 5, 6); persistent memory capacity in GiB (note 2); nominal interleaved persistent memory bandwidth in GB/s (note 3).

    0 R R R R R R 0 0

    1 R R R R R N R 16 21.33

    2 R R R R N R N R 32 42.67

    3 R R R N R N R N R 48 64

    4 R N R N R R N R N R 64 85.33

5 (note 4) R N R N R N R N R N R 80 Inconsistent

    6 R N R N R N N R N R N R 96 128

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: CPU 4 is only available on HPE ProLiant DL560 and HPE ProLiant DL580 Gen10 servers.
Note 2: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 3: Nominal persistent memory bandwidth assumes that the system is running at 2667 MT/s, that NVDIMM-N interleaving is enabled, and that no bandwidth is being used for the HPE SmartMemory DIMMs.
Note 4: Non-interleaved. NVDIMM-N interleaving should be disabled if five NVDIMM-Ns are used.


Although Hewlett Packard Enterprise recommends that HPE NVDIMM-Ns be used with six HPE SmartMemory DIMMs per processor, they may also be combined with smaller quantities of HPE SmartMemory DIMMs if regular memory performance is not as important or if larger persistent memory capacity is needed.

    The recommended configurations below assume that all HPE SmartMemory DIMMs are identical (no mix of ranks).

    TABLE 14. Extended population guidelines for HPE NVDIMM-Ns in HPE ProLiant DL360/DL380/DL560/DL580 Gen10 servers (part 1)

CPU 1/CPU 2

Table columns, left to right: regular DIMM + HPE NVDIMM-N count; NI only (non-interleaved; note 2); DIMM slots 1 through 12 (memory controller 2: channels 6, 5, 4; memory controller 1: channels 1, 2, 3); persistent memory capacity in GiB (note 1).

    1+0 R 0

    1+1 N R 16

    1+2 N N R 32

    1+3 N N N R 48

    1+4 N N R N N 64

    1+5 N N N R N N 80

    1+6 N N N N R N N 96

    1+7 N N N N N R N N 112

    1+8 N N N N N N R N N 128

    1+9 N N N N N N R N N N 144

    1+10 N N N N N N R N N N N 160

    1+11 N N N N N N N R N N N N 176

    2+0 R R 0

    2+1 N R R 16

    2+2 N N R R 32

    2+3 N N N R R 48

    2+43 N N R N R N

    64 N N N R R N

    2+5 N N N N R R N 80

    2+63 N N N N R N R N

    96 N N N N N N R R

    2+7 N N N N N N R R N 112

    2+8 N N N N N N R R N N 128

    2+9 N N N N N N R N R N N 144

    2+10 N N N N N N N R N R N N 160

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.
Note 3: Two configurations are recommended for this number of NVDIMM-Ns: one that works with NVDIMM-N interleaving either enabled or disabled, the other intended for NVDIMM-N interleaving to be disabled.


    TABLE 15. Extended population guidelines for HPE NVDIMM-Ns in HPE ProLiant DL360/DL380/DL560/DL580 Gen10 servers (part 2)

CPU 1/CPU 2

Table columns, left to right: regular DIMM + HPE NVDIMM-N count; NI only (non-interleaved; note 2); DIMM slots 1 through 12 (memory controller 2: channels 6, 5, 4; memory controller 1: channels 1, 2, 3); persistent memory capacity in GiB (note 1).

    3+0 R R R 0

    3+1 N R R R 16

    3+2 N N R R R 32

    3+3 N N N R R R 48

    3+43 N N R N R N R 64 N N N N R R R

    3+5 N N N N N R R R 80

    3+6 N N N N R N R N R 96

    3+7 N N N N N N R R N R 112

    3+8 N N N N N N R N R N R 128

    3+9 N N N N N N N R N R N R 144

    4+0 R R R R 0

    4+1 N R R R R 16

    4+2 N R R R R N 32

    4+33 N R N R N R R 48 N N R R R R N

    4+43 N R N R R N R N 64 N N R R R R N N

    4+53 N N R N R R R N N 80

    4+63 N R N R N N R N R N 96 N N R N R N R R N N

    4+7 N N R N R N R N R N N 112

    4+8 N N R N R N N R N R N N 128

    6+0 R R R R R R 0

    6+1 R N R R R R R 16

    6+2 R N R N R R R R 32

    6+3 R N R N R N R R R 48

    6+4 R N R N R R N R N R 64

    6+5 R N R N R N R N R N R 80

    6+6 R N R N R N N R N R N R 96

    8+04 R R R R R R R R 0

    8+14 R R R R R R R R N 16

    8+24 N R R R R R R R R N 32

    8+34 N R R R R R R R R N N 48

    8+44 N N R R R R R R R R N N 64

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.
Note 3: Two configurations are recommended for this number of NVDIMM-Ns: one that works with NVDIMM-N interleaving either enabled or disabled, the other intended for NVDIMM-N interleaving to be disabled.
Note 4: Configurations with eight regular DIMMs use fewer channels and provide lower bandwidth for regular memory than the configurations with six regular DIMMs.


    Population guidelines for HPE NVDIMM-Ns in HPE ProLiant BL460c Gen10 server blades In the HPE ProLiant BL460c Gen10 server blade, only some of the DIMM slots are connected to the Smart Storage battery and are suitable for HPE NVDIMM-Ns.

    TABLE 16. Population guidelines for HPE NVDIMM-Ns in HPE ProLiant BL460c Gen10 server blades (part 1)

CPU 1

Table columns, left to right: regular DIMM + HPE NVDIMM-N count; NI only (non-interleaved; note 3); DIMM slots 1 through 8 (memory controller 1: channels 3, 2, 1; memory controller 2, with Smart Storage battery support (note 2): channels 4, 5, 6); persistent memory capacity in GiB (note 1).

    1+0 R 0

    1+1 R N 16

    1+2 R N N 32

    2+0 R R 0

    2+1 R R N 16

    2+2 R R N N 32

    3+0 R R R 0

    3+1 R R R N 16

    3+2 R R R N N 32

    4+0 R R R R 0

    4+1 R R R R N 16

    4+2 R R N R R N 32

    6+0 R R R R R R 0

    6+1 R R R N R R R 16

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: The HPE Smart Storage battery supports a minimum of two HPE NVDIMM-Ns across all CPUs in the BL460c server blade.
Note 3: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.

    TABLE 17. Population guidelines for HPE NVDIMM-Ns in HPE BL460c Gen10 server blades (part 2)

CPU 2

Table columns, left to right: regular DIMM + HPE NVDIMM-N count; NI only (non-interleaved; note 3); DIMM slots 8 down through 1 (memory controller 2: channels 6, 5, 4; memory controller 1, with Smart Storage battery support (note 2): channels 1, 2, 3); persistent memory capacity in GiB (note 1).

    1+0 R 0

    1+1 R N 16

    1+2 R N N 32

    2+0 R R 0

    2+1 R R N 16

    2+2 R R N N 32

    3+0 R R R 0

    3+1 N R R R 16

    3+2 R R R N N 32

    4+0 R R R R 0

    4+1 R R R R N 16

    4+2 R R N R R N 32

    6+0 R R R R R R 0

    6+1 R R R N R R R 16

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: The HPE Smart Storage battery supports a minimum of two HPE NVDIMM-Ns across all CPUs in the BL460c server blade.
Note 3: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.


    Population guidelines for HPE NVDIMM-Ns in HPE Synergy 480 and 660 Gen10 compute modules Since HPE NVDIMM-Ns are designed to unleash maximum system performance, systems using HPE NVDIMM-Ns should have one HPE SmartMemory DPC to provide maximum memory performance (i.e., six HPE SmartMemory DIMMs per processor). One to six HPE NVDIMM-Ns per processor (based on persistent memory capacity and performance needs) should then be added as described in Table 18. The HPE Smart Storage battery is connected to all DIMM slots and supports a maximum of twelve HPE NVDIMM-Ns across all CPUs.

    TABLE 18. Population guidelines for HPE NVDIMM-Ns with six HPE SmartMemory DIMMs attached to CPU 1 in HPE Synergy 480 or CPU 1 and CPU 2 in HPE Synergy 660 Gen10 compute modules

SY480 CPU 1 or SY660 CPU 1/CPU 2

Table columns, left to right: HPE NVDIMM-N count; DIMM slots 1 through 12 (memory controller 2: channels 6, 5, 4; memory controller 1: channels 1, 2, 3); persistent memory capacity in GiB (note 1); nominal interleaved persistent memory bandwidth in GB/s (note 2).

    0 R R R R R R 0 0

    1 R N R R R R R 16 21.33

    2 R N R N R R R R 32 42.67

    3 R N R N R N R R R 48 64

    4 R N R N R R N R N R 64 85.33

5 (note 3) R N R N R N R N R N R 80 Inconsistent

    6 R N R N R N N R N R N R 96 128

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: Nominal persistent memory bandwidth assumes that the system is running at 2667 MT/s, that NVDIMM-N interleaving is enabled, and that no bandwidth is being used for the HPE SmartMemory DIMMs.
Note 3: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.

    The population guidelines described in Table 19 are a mirror image of those in Table 18.

    TABLE 19. Population guidelines for HPE NVDIMM-Ns with six HPE SmartMemory DIMMs attached to CPU 2 in HPE Synergy 480 or CPU 3 and CPU 4 in HPE Synergy 660 Gen10 compute modules

SY480 CPU 2 or SY660 CPU 3/CPU 4

Table columns, left to right: HPE NVDIMM-N count; DIMM slots 1 through 12 (memory controller 1: channels 3, 2, 1; memory controller 2: channels 4, 5, 6); persistent memory capacity in GiB (note 1); nominal interleaved persistent memory bandwidth in GB/s (note 2).

    0 R R R R R R 0 0

    1 R R R R R N R 16 21.33

    2 R R R R N R N R 32 42.67

    3 R R R N R N R N R 48 64

    4 R N R N R R N R N R 64 85.33

5 (note 3) R N R N R N R N R N R 80 Inconsistent

    6 R N R N R N N R N R N R 96 128

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: Nominal bandwidth assumes that the system is running at 2667 MT/s, that NVDIMM-N interleaving is enabled, and that no bandwidth is being used for the HPE SmartMemory DIMMs.
Note 3: Non-interleaved. NVDIMM-N interleaving should be disabled if five NVDIMM-Ns are used.


    Although Hewlett Packard Enterprise recommends that HPE NVDIMM-Ns be used with six HPE SmartMemory DIMMs per processor, they may also be combined with smaller quantities of HPE SmartMemory DIMMs as described in Tables 18 to 23 if regular memory performance is not as important or if larger persistent memory capacity is needed.

    TABLE 20. Extended population guidelines for HPE NVDIMM-Ns attached to CPU 1 in HPE Synergy 480 or CPU 1 and CPU 2 in HPE Synergy 660 Gen10 compute modules (part 1)

SY480 CPU 1 or SY660 CPU 1/CPU 2

Table columns, left to right: regular DIMM + HPE NVDIMM-N count; NI only (non-interleaved; note 2); DIMM slots 1 through 12 (memory controller 2: channels 6, 5, 4; memory controller 1: channels 1, 2, 3); persistent memory capacity in GiB (note 1).

    1+0 R 0

    1+1 N R 16

    1+2 N N R 32

    1+3 N N N R 48

    1+4 N N R N N 64

    1+5 N N N R N N 80

    1+6 N N N N R N N 96

    1+7 N N N N N R N N 112

    1+8 N N N N N N R N N 128

    1+9 N N N N N N R N N N 144

    1+10 N N N N N N R N N N N 160

    1+11 N N N N N N N R N N N N 176

    2+0 R R 0

    2+1 N R R 16

    2+2 N N R R 32

    2+3 N N N R R 48

    2+43 N N R N R N

    64 N N N R R N

    2+5 N N N N R R N 80

    2+63 N N N N R N R N

    96 N N N N N N R R

    2+7 N N N N N N R R N 112

    2+8 N N N N N N R R N N 128

    2+9 N N N N N N R N R N N 144

    2+10 N N N N N N N R N R N N 160

Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
Note 1: Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
Note 2: Non-interleaved. NVDIMM-N interleaving should be disabled for this configuration.
Note 3: Two configurations are recommended for this number of NVDIMM-Ns: one that works with NVDIMM-N interleaving either enabled or disabled, the other intended for NVDIMM-N interleaving to be disabled.


    TABLE 21. Extended population guidelines for HPE NVDIMM-Ns attached to CPU 1 in HPE Synergy 480 or CPU 1 and CPU 2 in HPE Synergy 660 Gen10 compute modules (part 2)

    SY480 CPU 1 or SY660 CPU 1/CPU 2

    Columns, left to right: Regular DIMM + HPE NVDIMM-N count; NI only2; DIMM Slot 1 through DIMM Slot 12 (Memory controller 2: Channels 6, 5, 4; Memory controller 1: Channels 1, 2, 3); Persistent memory capacity1 (in GiB).

    3+0 R R R 0

    3+1 N R R R 16

    3+2 N N R R R 32

    3+3 N N N R R R 48

    3+43 N N R R R 64
    3+43 N N N N R R R 64

    3+5 N N N N N R R R 80

    3+6 N N N N R R R 96

    3+7 N N N N N N R R R 112

    3+8 N N N N N N R R R 128

    3+9 N N N N N N N R R R 144

    4+0 R R R R 0

    4+1 N R R R R 16

    4+2 N R R R R N 32

    4+33 N R N R N R R 48
    4+33 N N R R R R N 48

    4+43 N R N R R N R N 64
    4+43 N N R R R R N N 64

    4+5 N N R N R R R N N 80

    4+63 N R N R N N R N R N 96
    4+63 N N R N R N R R N N 96

    4+7 N N R N R N R N R N N 112

    4+8 N N R N R N N R N R N N 128

    6+0 R R R R R R 0

    6+1 R N R R R R R 16

    6+2 R N R N R R R R 32

    6+3 R N R N R N R R R 48

    6+4 R N R N R R N R N R 64

    6+5 R N R N R N R N R N R 80

    6+6 R N R N R N N R N R N R 96

    8+04 R R R R R R R R 0

    8+14 R R R R R R R R N 16

    8+24 N R R R R R R R R N 32

    8+34 N R R R R R R R R N N 48

    8+44 N N R R R R R R R R N N 64

    Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
    1 Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
    2 Non-interleaved: NVDIMM-N interleaving should be disabled for this configuration.
    3 Two configurations are recommended for this number of NVDIMM-Ns: one that works with NVDIMM-N interleaving either enabled or disabled, the other intended for NVDIMM-N interleaving to be disabled.
    4 Configurations with eight regular DIMMs use fewer channels and provide lower bandwidth for regular memory than the configurations with six regular DIMMs.


    TABLE 22. Extended population guidelines for HPE NVDIMM-Ns attached to CPU 2 in HPE Synergy 480 or CPU 3 and CPU 4 in HPE Synergy 660 Gen10 compute modules (part 1)

    SY480 CPU 2 or SY660 CPU 3/CPU 4

    Columns, left to right: Regular DIMM + HPE NVDIMM-N count; NI only2; DIMM Slot 1 through DIMM Slot 12 (Memory controller 1: Channels 3, 2, 1; Memory controller 2: Channels 4, 5, 6); Persistent memory capacity1 (in GiB).

    1+0 R 0

    1+1 R N 16

    1+2 R N N 32

    1+3 R N N N 48

    1+4 N N R N N 64

    1+5 N N R N N N 80

    1+6 N N R N N N N 96

    1+7 N N R N N N N N 112

    1+8 N N R N N N N N N 128

    1+9 N N N R N N N N N N 144

    1+10 N N N N R N N N N N N 160

    1+11 N N N N R N N N N N N N 176

    2+0 R R 0

    2+1 R R N 16

    2+2 R R N N 32

    2+3 R R N N N 48

    2+43 N R N R N N 64
    2+43 N R R N N N 64

    2+5 N R R N N N N 80

    2+63 N R N R N N N N 96
    2+63 R R N N N N N N 96

    2+7 N R R N N N N N N 112

    2+8 N N R R N N N N N N 128

    2+9 N N R N R N N N N N N 144

    2+10 N N R N R N N N N N N N 160

    Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
    1 Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
    2 Non-interleaved: NVDIMM-N interleaving should be disabled for this configuration.
    3 Two configurations are recommended for this number of NVDIMM-Ns: one that works with NVDIMM-N interleaving either enabled or disabled, the other intended for NVDIMM-N interleaving to be disabled.


    TABLE 23. Extended population guidelines for HPE NVDIMM-Ns attached to CPU 2 in HPE Synergy 480 or CPU 3 and CPU 4 in HPE Synergy 660 Gen10 compute modules (part 2)

    SY480 CPU 2 or SY660 CPU 3/CPU 4

    Columns, left to right: Regular DIMM + HPE NVDIMM-N count; NI only2; DIMM Slot 1 through DIMM Slot 12 (Memory controller 1: Channels 3, 2, 1; Memory controller 2: Channels 4, 5, 6); Persistent memory capacity1 (in GiB).

    3+0 R R R 0

    3+1 R R R N 16

    3+2 R R R N N 32

    3+3 R R R N N N 48

    3+43 R N R N R N N 64
    3+43 R R R N N N N 64

    3+5 R R R N N N N N 80

    3+6 R N R N R N N N 96

    3+7 R N R R N N N N N N 112

    3+8 R N R N R N N N N N N 128

    3+9 R N R N R N N N N N N N 144

    4+0 R R R R 0

    4+1 R R R R N 16

    4+2 N R R R R N 32

    4+33 R R N R N R N 48
    4+33 N R R R R N N 48

    4+43 N R N R R N R N 64
    4+43 N N R R R R N N 64

    4+5 N N R R R N R N N 80

    4+63 N R N R N N R N R N 96
    4+63 N N R R N R N R N N 96

    4+7 N N R N R N R N R N N 112

    4+8 N N R N R N N R N R N N 128

    6+0 R R R R R R 0

    6+1 R R R R R N R 16

    6+2 R R R R N R N R 32

    6+3 R R R N R N R N R 48

    6+4 R N R N R R N R N R 64

    6+5 R N R N R N R N R N R 80

    6+6 R N R N R N N R N R N R 96

    8+04 R R R R R R R R 0

    8+14 R R R R R R R R N 16

    8+24 N R R R R R R R R N 32

    8+34 N R R R R R R R R N N 48

    8+44 N N R R R R R R R R N N 64

    Key: R = Regular DIMM (i.e., an HPE SmartMemory DIMM), N = HPE NVDIMM-N.
    1 Persistent memory capacity assumes that 16 GiB NVDIMM-Ns are used.
    2 Non-interleaved: NVDIMM-N interleaving should be disabled for this configuration.
    3 Two configurations are recommended for this number of NVDIMM-Ns: one that works with NVDIMM-N interleaving either enabled or disabled, the other intended for NVDIMM-N interleaving to be disabled.
    4 Configurations with eight regular DIMMs use fewer channels and provide lower bandwidth for regular memory than the configurations with six regular DIMMs.


    APPENDIX D—POPULATION GUIDELINES FOR HPE PERSISTENT MEMORY

    For data-intensive workloads where latency and capacity are key considerations, HPE Apollo, HPE ProLiant, and HPE Synergy servers deliver faster data access at a reasonable price point when equipped with HPE Persistent Memory 128, 256, or 512 GB modules featuring Intel Optane DC Persistent Memory. This persistent memory offering, based on phase-change memory technology, must be installed alongside HPE SmartMemory DIMMs.

    DIMMs and HPE Persistent Memory modules are installed in specific configurations based on the workload requirements of the server. Supported configurations are optimized for persistent memory capacity, volatile memory capacity, and performance.

    • Persistent memory capacity: the available capacity is equal to the HPE Persistent Memory capacity.

    • Volatile memory capacity:
      – App Direct (1 LM) mode: the volatile capacity is equal to the DIMM capacity.
      – Memory (2 LM) mode: the volatile capacity is some or all of the HPE Persistent Memory capacity.

    • Performance:
      – Uses all channels to efficiently utilize processor resources.
      – Memory (2 LM) mode: more regular DIMMs provide a better cache ratio (see the sketch after this list).
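    As a worked example of the capacity rules above, the sketch below computes per-processor volatile and persistent capacity for both operating modes. The function name and the example counts are illustrative assumptions, not output from an HPE configurator.

        # Minimal sketch of the capacity rules above for one processor.
        # "App Direct" models 1 LM mode; anything else models Memory (2 LM) mode,
        # where the DIMMs act as a cache in front of the persistent modules.

        def capacities(dimm_count, dimm_gb, pmem_count, pmem_gb, mode="App Direct"):
            dram = dimm_count * dimm_gb
            pmem = pmem_count * pmem_gb
            if mode == "App Direct":              # 1 LM: DRAM is the volatile tier
                return {"volatile_gb": dram, "persistent_gb": pmem}
            # Memory (2 LM): persistent modules provide the volatile capacity
            # (the text says "some or all"; this sketch assumes all), and the
            # DRAM:PMem ratio indicates how much cache backs that capacity.
            return {"volatile_gb": pmem, "persistent_gb": 0,
                    "dram_to_pmem_cache_ratio": f"1:{pmem // dram}" if dram else "n/a"}

        # 6+6 population with 128 GB DIMMs and 512 GB HPE Persistent Memory modules:
        print(capacities(6, 128, 6, 512, "App Direct"))   # 768 GB volatile, 3072 GB persistent
        print(capacities(6, 128, 6, 512, "Memory"))       # 3072 GB volatile, 1:4 cache ratio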

    TABLE 24. DIMMs and HPE Persistent Memory modules can be installed in the server in the following configurations

    Note: The following population rule only applies to DL360, DL380, DL560, DL580, SY480, and SY660.

    Slot 1 through Slot 12

    6+6  D P D P D P P D P D P D
    4+6  D D P D P P D P D D
    2+8  P D D D D D D D D P
    2+6  D D D P P D D D
    2+4  P D D D D P
    1+6  D D D D D P D

    Key: P = HPE Persistent Memory module, D = DIMM

    NOTE P00918-B21—HPE 8GB 1Rx8 2933 RDIMM does not support pairing with HPE Persistent Memory.

    6+6 configuration

    This configuration is also referred to as a 2-2-2 configuration, based on the number of modules populated per channel, per memory controller. This configuration is symmetric and uses all slots. It provides the best bandwidth for both DRAM DIMMs and HPE Persistent Memory, using all the channels and pins on the processor. Since DRAM DIMMs and HPE Persistent Memory modules share channels, there might be some competition for access; the HPE Persistent Memory module will slow down the DRAM compared to a system without HPE Persistent Memory.
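    To see where the 2-2-2 name comes from, the sketch below groups the twelve slots of the 6+6 row in Table 24 into two-slot channels and counts modules per channel. The two-slots-per-channel grouping matches the channel headings used in the tables above; treat it as an assumption for other platforms.

        # Why the 6+6 population is called "2-2-2": two modules in every channel,
        # three channels per memory controller. Pattern taken from Table 24.

        SIX_PLUS_SIX = "DPDPDPPDPDPD"   # D = DIMM, P = HPE Persistent Memory module

        def modules_per_channel(pattern, empty="."):
            """Count populated modules in each two-slot channel (slots 1-12)."""
            return [sum(1 for slot in pattern[i:i + 2] if slot != empty)
                    for i in range(0, len(pattern), 2)]

        counts = modules_per_channel(SIX_PLUS_SIX)
        print(counts)                                  # [2, 2, 2, 2, 2, 2]
        print("-".join(str(c) for c in counts[:3]))    # "2-2-2" on each memory controller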


    FIGURE 14. HPE Persistent Memory capacity up to 3 TB using 512 GB modules and DIMM capacity up to 768 GB using 128 GB DIMMs

    NOTE Processor 2 in SY480 and processor 3/4 in SY660 will have reversed slot numbers

    4+6 configuration

    This configuration is also referred to as a 2-2-1 configuration (that nomenclature does not consider whether there are two HPE Persistent Memory modules or one DIMM and one HPE Persistent Memory module in a channel). By using all channels, this configuration provides the best performance for DIMMs. It offers less capacity for HPE Persistent Memory. Since DIMMs and HPE Persistent Memory modules share four channels, they might compete for access.

    FIGURE 15. HPE Persistent Memory capacity up to 2 TB using 512 GB modules and DIMM capacity up to 768 GB using 128 GB DIMMs

    NOTE Processor 2 in SY480 and processor 3/4 in SY660 will have reversed slot numbers


    2+8 configuration

    This configuration is also referred to as a 2-2-1 configuration (that nomenclature does not consider whether there are two DIMMs or one DIMM and one HPE Persistent Memory module in a channel). This configuration does not support Memory (2 LM) mode because the capacity of the DIMMs is likely too large to use as a cache for the small number of HPE Persistent Memory modules. This configuration offers the largest regular DIMM capacity but only offers 66% of the possible DIMM bandwidth, since DIMMs are not installed on all channels.
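    The 66% figure is simply the fraction of memory channels that carry DIMMs in this population: the eight DIMMs sit two-per-channel on four of the six channels per processor. The sketch below works that out; the 2933 MT/s per-channel rate is an assumption used only to put numbers on the ratio.

        # Rough illustration of the 2+8 DRAM bandwidth statement: DIMMs occupy
        # four of the six channels per processor, so DRAM interleaves four ways.
        # Assumes DDR4 at 2933 MT/s with an 8-byte data path per channel.

        channels_with_dimms, channels_total = 4, 6
        per_channel_gbs = 2933 * 8 / 1000          # ~23.5 GB/s per channel

        dram_bw = channels_with_dimms * per_channel_gbs
        peak_bw = channels_total * per_channel_gbs
        print(f"{dram_bw:.1f} of {peak_bw:.1f} GB/s "
              f"({channels_with_dimms / channels_total:.1%} of the possible DIMM bandwidth)")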

    NOTE Only App Direct mode is supported with 2+8 configuration.

    FIGURE 16. HPE Persistent Memory capacity up to 1 TB using 512 GB HPE Persistent Memory and DIMM capacity up to 1 TB using 128 GB DIMMs

    NOTE Processor 2 in SY480 and processor 3/4 in SY660 will have reversed slot numbers


    2+6 configuration

    This configuration is also referred to as a 2-1-1 configuration. Two channels are shared, so there is still some competition between DRAM and HPE Persistent Memory traffic.

    NOTE LRDIMMs are not supported in Memory Mode with 2+6 configuration.

    FIGURE 17. HPE Persistent Memory capacity up to 1 TB using 512 GB HPE Persistent Memory and DIMM capacity up to 768 GB using 128 GB DIMMs

    NOTE Processor 2 in SY480 and processor 3/4 in SY660 will have reversed slot numbers

    2+4 configuration

    This configuration is also referred to as a 1-1-1 configuration. Although HPE Persistent Memory traffic stays out of the way of DIMM traffic, the regular DIMMs are only interleaved four ways and provide less bandwidth than a +6 configuration.

    FIGURE 18. HPE Persistent Memory capacity up to 1 TB using 512 GB HPE Persistent Memory and DIMM capacity up to 512 GB using 128 GB DIMMs


    1+6 configuration

    This configuration is also referred to as a 1-1-1 asymmetric configuration. It offers the smallest HPE Persistent Memory capacity. This configuration does not support Memory (2 LM) mode because 2 LM mode requires a symmetrical population under each memory controller.
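    The symmetry requirement can be checked mechanically: Memory (2 LM) mode needs the same mix of DIMMs and HPE Persistent Memory modules under each of the two memory controllers. A tiny sketch, with illustrative per-controller counts only:

        # Memory (2 LM) mode requires a symmetrical population under each
        # memory controller. Each tuple is (DIMM count, PMem module count)
        # under one controller; the counts below are illustrative.

        def symmetric_for_memory_mode(ctrl_a, ctrl_b):
            return ctrl_a == ctrl_b

        print(symmetric_for_memory_mode((3, 3), (3, 3)))   # 6+6 split evenly -> True
        print(symmetric_for_memory_mode((3, 1), (3, 0)))   # 1+6: lone PMem module -> False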

    NOTE Only App Direct mode is supported with 1+6 configuration.

    FIGURE 19. HPE Persistent Memory capacity up to 512 GB using 512 GB HPE Persistent Memory and DIMM capacity up to 768 GB using 128 GB DIMMs

    NOTE Processor 2 in SY480 and processor 3/4 in SY660 will have reversed slot numbers

    TABLE 25. DIMMs and HPE Persistent Memory modules can be installed in the server in the following configurations

    Note: The following population rule only applies to HPE Apollo 4200 and Apollo 2000 Gen10 servers.

    Slot 1 through Slot 8

    2+6  D D D P P D D D
    2+4  P D D D D P
    1+6  D D D P D D D

    Key: P = HPE Persistent Memory module, D = DIMM

    NOTE P00918-B21—HPE 8GB 1Rx8 2933 RDIMM does not support pairing with HPE Persistent Memory.

    NOTE LRDIMMs are not supported in Memory Mode with 2+6 configuration.


    2+6 configuration

    This configuration is also referred to as a 2-1-1 configuration. Two channels are shared, so there is still some competition between DRAM and HPE Persistent Memory traffic.

    FIGURE 20. HPE Persistent Memory capacity up to 1 TB using 512 GB HPE Persistent Memory and DIMM capacity up to 768 GB using 128 GB DIMMs

    2+4 configuration

    This configuration is also referred to as a 1-1-1 configuration. Although HPE Persistent Memory traffic stays out of the way of DIMM traffic, the regular DIMMs are only interleaved four ways and provide less bandwidth than a +6 configuration.

    FIGURE 21. HPE Persistent Memory capacity up to 1 TB using 512 GB HPE Persistent Memory and DIMM capacity up to 512 GB using 128 GB DIMMs


    1+6 configuration

    This configuration is also referred to as a 1-1-1 asymmetric configuration. It offers the smallest HPE Persistent Memory capacity. This configuration does not support Memory (2 LM) mode because 2 LM mode requires a symmetrical population under each memory controller.

    NOTE Only App Direct mode is supported with 1+6 configuration.

    FIGURE 22. HPE Persistent Memory capacity up to 512 GB using 512 GB HPE Persistent Memory and DIMM capacity up to 768 GB using 128 GB DIMMs



    © Copyright 2017–2021 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

    Intel Xeon and Intel Optane DC are trademarks of Intel Corporation in the U.S. and other countries. All third-party marks are property of their respective owners.

    a00017079ENW, June 2021, Rev. 13

    APPENDIX E—BALANCED POPULATION RULES VERSUS RAS POPULATION RULES

    Refer to the HPE Server Memory RAS white paper when enabling RAS features. Below is an example of a RAS population that differs from the balanced population used for performance.

    FIGURE 23. RAS population that differs from balanced population

    Resources

    HPE Server technical white papers library

    HPE Server Memory

    HPE Persistent Memory

    HPE Memory Configurator

    HPE Server Memory whiteboard video

    LEARN MORE AT hpe.com/info/memory


