IBM Flex System PCIe Expansion Node

Product Guide



Abstract

The IBM® Flex System™ PCIe Expansion Node provides the ability to attach additional PCI Express cards, such as High IOPS SSD adapters, fabric mezzanine cards, and next-generation graphics processing units (GPU), to supported IBM Flex System compute nodes. This capability is ideal for many applications that require high performance I/O, special telecommunications network interfaces, or hardware acceleration using a PCI Express GPU card. The PCIe Expansion Node supports up to four PCIe adapters and two additional Flex System I/O expansion adapters.

Changes in the October 29 update:
* Further clarified that NVIDIA Kxx GPUs are only supported when the attached compute node has 1TB or less of system memory installed

Figure 1 shows the IBM Flex System PCIe Expansion Node attached to an IBM Flex System x240 Compute Node.

Figure 1. IBM Flex System PCIe Expansion Node (right) attached to an x240 Compute Node (left)


Did you know?

The PCIe Expansion Node is ideal for application environments that are written to take advantage of acceleration and visualization performance using GPUs that are connected to Flex System Compute nodes. It is also useful for environments that require specific PCIe adapter connectivity to a Flex System Compute node.


Part number information

Table 1. Ordering part number and feature code

Description                          Part number   Feature code
IBM Flex System PCIe Expansion Node  81Y8983       A1BV

The part number includes the following items:
  • IBM Flex System PCIe Expansion Node
  • Two riser assemblies
  • Interposer cable assembly
  • Double-wide shelf
  • Two auxiliary power cables (for adapters that require additional +12V power)
  • Four removable PCIe slot air flow baffles
  • Documentation CD that contains the Installation and Service Guide
  • Warranty information and Safety flyer and Important Notices document


Supported servers

The IBM Flex System PCIe Expansion Node is supported when it is attached to the IBM Flex System compute nodes listed in Table 2. Only one Expansion Node can be attached to each compute node.

Table 2. Supported servers

Server          Support for IBM Flex System PCIe Expansion Node (81Y8983)
x220 (7906)     Yes*
x222 (7916)     No
x240 (8737)     Yes*
x240 M5 (9532)  Yes†
x440 (7917)     No
p24L (1457)     No
p260 (7895)     No
p270 (7954)     No
p460 (7895)     No
* The PCIe Expansion Node requires that both processors be installed in the x220 and x240.
† The x240 M5 supports NVIDIA adapters only when the compute node has 1 TB or less of memory installed.


Features

The PCIe Expansion Node has the following features:
  • Support for up to four standard PCIe 2.0 adapters:
    • Two PCIe 2.0 x16 slots that support full-length, full-height adapters (1x, 2x, 4x, 8x, and 16x adapters supported)
    • Two PCIe 2.0 x8 slots that support low-profile adapters (1x, 2x, 4x, and 8x adapters supported)
  • Support for PCIe 3.0 adapters by operating them in PCIe 2.0 mode
  • Support for one full-length, full-height double-wide adapter (consuming the space of the two full-length, full-height adapter slots)
  • Support for PCIe cards with higher power requirements

    The Expansion Node provides two auxiliary power connections, up to 75W each for a total of 150W of additional power using standard 2x3, +12V six-pin power connectors. These connectors are placed on the base planar so that they both can provide power to a single adapter card (up to 225W), or to two adapters (up to 150W each). Power cables are used to connect from these connectors to the PCIe adapters and are included with the PCIe Expansion Node.

  • Two Flex System I/O expansion connectors

    The I/O expansion connectors are labeled I/O expansion 3 connector and I/O expansion 4 connector in Figure 2. These I/O connectors expand the I/O capability of the attached compute node.
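As a worked example of the auxiliary power budget described above, the sketch below (a hypothetical helper, not part of any IBM tooling; the constant and function names are mine) totals the power available to one adapter from its slot plus zero, one, or two auxiliary connectors:

```python
# Illustrative power-budget helper for the PCIe Expansion Node.
# A PCIe slot supplies up to 75 W, and each of the two 2x3 six-pin
# +12V auxiliary connectors adds up to 75 W more.

SLOT_POWER_W = 75      # power available from the PCIe slot itself
AUX_CONNECTOR_W = 75   # power available from each auxiliary connector

def max_adapter_power(aux_connectors_used: int) -> int:
    """Maximum power (W) available to a single adapter."""
    if not 0 <= aux_connectors_used <= 2:
        raise ValueError("the Expansion Node has only two auxiliary connectors")
    return SLOT_POWER_W + aux_connectors_used * AUX_CONNECTOR_W

# One adapter fed by both connectors: 75 + 2*75 = 225 W.
# Two adapters with one connector each: 75 + 75 = 150 W apiece.
```

This matches the figures in the text: 225 W when both auxiliary cables feed a single adapter, or 150 W each when split across two adapters.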

The layout of the PCIe Expansion Node is shown in Figure 2. The four PCIe slots are routed through two riser connectors on the system planar.


Figure 2. Layout of the PCIe Expansion Node

Figure 3 shows a top-down view of the PCIe Expansion Node connected to the x240 Compute Node.


Figure 3. Open view of the PCIe Expansion Node (bottom) connected to the x240 Compute Node (top).


Architecture

Figure 4 shows the architecture of the PCIe Expansion Node when connected to a compute node.


Figure 4. Architecture of the PCIe Expansion Node

The Expansion Node connects to a standard-width compute node through the interposer cable, which plugs into the expansion connector on the compute node and the interposer connector on the Expansion Node. This link forms a PCIe 2.0 x16 connection between the compute node and the PCIe switch in the Expansion Node. The PCIe switch has connections to all six slots in the Expansion Node:
  • PCIe 2.0 x16 connections to the two full-length full-height PCIe slots
  • PCIe 2.0 x8 connections to the two low-profile PCIe slots
  • PCIe 2.0 x16 connections to the two Flex System I/O expansion slots (labeled I/O 3 and I/O 4 in the figure)

Notes:
  • In compute nodes such as the x220 and x240, I/O expansion slots 1 and 2 in the server operate at PCIe 3.0 speeds. However, I/O expansion slots 3 and 4 in the PCIe Expansion Node (and also the four standard PCIe slots) operate at PCIe 2.0 speeds.
  • The expansion connector in the compute node is routed through processor 2. Therefore, processor 2 must be installed in the compute node.
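For a rough sense of the link speeds involved, the following sketch (my own illustration; the 500 MB/s-per-lane figure follows from PCIe 2.0's 5 GT/s signaling rate and 8b/10b encoding) computes the theoretical one-direction bandwidth of the links described above:

```python
# PCIe 2.0 runs at 5 GT/s per lane; 8b/10b encoding leaves 8 of every
# 10 bits as payload, so each lane carries about 500 MB/s per direction.

PER_LANE_MBPS = 5000 * 8 // 10 // 8   # 5 GT/s * 8/10 encoding / 8 bits/byte = 500 MB/s

def link_bandwidth_mbps(lanes: int) -> int:
    """Theoretical usable one-direction bandwidth of a PCIe 2.0 link, in MB/s."""
    return lanes * PER_LANE_MBPS

# x16 links (interposer cable, full-height slots): 8000 MB/s (8 GB/s)
# x8 links (low-profile slots):                    4000 MB/s (4 GB/s)
```

Note that all four PCIe slots and both I/O expansion connectors sit behind the single x16 interposer link, so aggregate throughput to the compute node is bounded by roughly 8 GB/s in each direction.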

When adapters are installed in the two I/O expansion slots, they connect to the chassis midplane and provide additional connections to the I/O module bays in the chassis. Table 3 shows the connections between the expansion slots and the module bays. The ports available from the adapter vary depending on whether the adapter is a 2-port or 4-port adapter. Similarly, the number of ports to the I/O module depends on the number of ports activated in the switch.

Table 3. Adapter-to-I/O bay correspondence

I/O expansion slot       Port on the adapter   Corresponding I/O module bay in the chassis
Slot 1 (Compute Node)    Port 1                Module bay 1
                         Port 2                Module bay 2
                         Port 3*               Module bay 1**
                         Port 4*               Module bay 2**
Slot 2 (Compute Node)    Port 1                Module bay 3
                         Port 2                Module bay 4
                         Port 3*               Module bay 3**
                         Port 4*               Module bay 4**
Slot 3 (Expansion Node)  Port 1                Module bay 1
                         Port 2                Module bay 2
                         Port 3*               Module bay 1**
                         Port 4*               Module bay 2**
Slot 4 (Expansion Node)  Port 1                Module bay 3
                         Port 2                Module bay 4
                         Port 3*               Module bay 3**
                         Port 4*               Module bay 4**
* Ports 3 and 4 require that a four-port card be installed in the expansion slot.
** Might require one or more port upgrades to be installed in the I/O module.
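The slot-and-port routing in Table 3 follows a simple pattern: odd-numbered expansion slots route to module bays 1 and 2, even-numbered slots to bays 3 and 4, with odd ports reaching the lower bay and even ports the higher one. A hypothetical lookup function (names are my own, for illustration only) capturing that pattern:

```python
# Lookup of Table 3: which chassis I/O module bay a given adapter
# port connects to, by I/O expansion slot. Ports 3 and 4 exist only on
# four-port adapters, and may require port upgrades in the I/O module.

def io_module_bay(slot: int, port: int) -> int:
    """Map (I/O expansion slot 1-4, adapter port 1-4) to a chassis I/O module bay."""
    if slot not in (1, 2, 3, 4) or port not in (1, 2, 3, 4):
        raise ValueError("slot and port must each be in the range 1-4")
    # Odd slots (1 and 3) route to bays 1/2; even slots (2 and 4) to bays 3/4.
    base = 0 if slot % 2 == 1 else 2
    # Odd ports (1 and 3) go to the lower bay; even ports (2 and 4) to the higher.
    return base + 1 if port % 2 == 1 else base + 2
```

For example, port 2 of an adapter in slot 4 (Expansion Node) connects to module bay 4, matching the last block of Table 3.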

Supported PCIe adapter cards

The Expansion Node supports the following general adapter characteristics:

  • Full-height cards, 4.2 in (107 mm)
  • Low-profile cards, 2.5 in (64 mm)
  • Half-length cards, 6.6 in (168 mm)
  • Full-length cards, 12.3 in (312 mm)
  • Support for up to four low-profile PCIe cards
  • Support for up to two full-height PCIe cards
  • Support for up to one full-height double-wide PCIe card
  • Support for PCIe standards 1.1 and 2.0 (PCIe 3.0 adapters are supported, operating in PCIe 2.0 mode)

The front-facing bezel of the Expansion Node is inset from the normal face of the compute nodes. This inset allows for cables connected to PCIe adapter cards that support external connectivity: the Expansion Node provides up to 80 mm of space in front of the PCIe adapter cards to accommodate the bend radius of these cables.
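As an illustration of the card-geometry limits listed above, here is a hypothetical fit checker (function and constant names are mine; it assumes, per the length limits listed, that the low-profile slots accept up to half-length cards while the full-height slots accept up to full-length cards):

```python
# Card dimensions from the supported-characteristics list above.
FULL_HEIGHT_MM, LOW_PROFILE_MM = 107, 64    # card heights (4.2 in / 2.5 in)
FULL_LENGTH_MM, HALF_LENGTH_MM = 312, 168   # card lengths (12.3 in / 6.6 in)

def fits_slot(card_height_mm: float, card_length_mm: float, slot: str) -> bool:
    """Check whether a card of the given size fits a 'full-height' or 'low-profile' slot."""
    if slot == "full-height":
        return card_height_mm <= FULL_HEIGHT_MM and card_length_mm <= FULL_LENGTH_MM
    if slot == "low-profile":
        return card_height_mm <= LOW_PROFILE_MM and card_length_mm <= HALF_LENGTH_MM
    raise ValueError("slot must be 'full-height' or 'low-profile'")
```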

Table 4 lists the PCIe adapters that are supported in the Expansion Node. Some adapters must be installed in one of the full-height slots as noted. If the NVIDIA Tesla M2090 is installed in the Expansion Node, then an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used, however.

Note: If an NVIDIA Grid K1, Grid K2, Tesla K20, or Tesla K40 is installed, the maximum system memory that can be installed is 1 TB. See http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5096047 for details.

Table 4. Supported adapter cards

Part number  Feature code  Description                                                   Maximum supported
High IOPS storage adapters
46C9078      A3J3          IBM 365GB High IOPS MLC Mono Adapter                          4
46C9081      A3J4          IBM 785GB High IOPS MLC Mono Adapter                          4
81Y4519*     5985          640GB High IOPS MLC Duo Adapter (full-height adapter)         2
81Y4527*     A1NB          1.28TB High IOPS MLC Duo Adapter (full-height adapter)        2
90Y4377      A3DY          IBM 1.2TB High IOPS MLC Mono Adapter (low-profile adapter)    2
90Y4397      A3DZ          IBM 2.4TB High IOPS MLC Duo Adapter (full-height adapter)     2
GPU and co-processor adapters
94Y5960      A1R4          NVIDIA Tesla M2090 (full-height adapter)                      1**
47C2120      A4F1          NVIDIA GRID K1 for IBM Flex System PCIe Expansion Node        1†§
47C2121      A4F2          NVIDIA GRID K2 for IBM Flex System PCIe Expansion Node        1†§
47C2119      A4F3          NVIDIA Tesla K20 for IBM Flex System PCIe Expansion Node      1†§
47C2137      A5HD          NVIDIA Tesla K40 for IBM Flex System PCIe Expansion Node      1†§
47C2122      A4F4          Intel Xeon Phi 5110P for IBM Flex System PCIe Expansion Node  1†
None         4809‡         IBM 4765 Crypto Card (full-height adapter)                    2
* Withdrawn from marketing
** If the NVIDIA Tesla M2090 is installed in the Expansion Node, no adapter can be installed in the other full-height slot; the two low-profile slots and the Flex System I/O expansion slots can still be used.
† The K1, K2, K20, K40, and 5110P adapters are double-wide cards that occupy both full-height PCIe slots. If one of these adapters is installed, the adjacent full-height slot is unavailable; however, adapters can still be installed in the two low-profile slots.
‡ Orderable as separate MTM 4765-001 feature 4809. Available via AAS (e-config) only.
§ NVIDIA GRID Kx and Tesla Kxx adapters supported only in servers with 1 TB or less memory installed.

Consult the IBM ServerProven® site for the current list of adapter cards that are supported in the Expansion Node:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

For information about the IBM High IOPS adapters, see the list of Internal Storage Product Guides from IBM Redbooks:
http://www.redbooks.ibm.com/portals/systemx?Open&page=pg&cat=internalstorage

Note: Although the design of the Expansion Node allows for a much greater set of standard PCIe adapter cards, the preceding table lists the adapters that are specifically supported. If the PCI Express adapter that you require is not listed on the ServerProven website, use the IBM ServerProven Opportunity Request for Evaluation (SPORE) process to confirm compatibility in the desired configuration.


Supported I/O expansion cards

Table 5 lists the Flex System I/O expansion cards that are supported in the PCIe Expansion Node.

Table 5. Supported I/O expansion cards

Part number  Feature code  Description                                                 Supported in the PEN
Networking
90Y3482      A3HK          IBM Flex System EN6132 2-port 40Gb Ethernet Adapter         Supported
88Y5920      A4K3          IBM Flex System CN4022 2-port 10Gb Converged Adapter        Supported
90Y3554      A1R1          IBM Flex System CN4054 10Gb Virtual Fabric Adapter          Supported*
00Y3306      A4K2          IBM Flex System CN4054R 10Gb Virtual Fabric Adapter         Supported†
90Y3558      A1R0          IBM Flex System CN4054 Virtual Fabric Adapter (SW Upgrade)  Supported
49Y7900      A10Y          IBM Flex System EN2024 4-port 1Gb Ethernet Adapter          Supported
90Y3466      A1QY          IBM Flex System EN4132 2-port 10Gb Ethernet Adapter         Supported
90Y3454      A1QZ          IBM Flex System IB6132 2-port FDR InfiniBand Adapter        Supported
Storage
88Y6370      A1BP          IBM Flex System FC5022 2-port 16Gb FC Adapter               Supported
95Y2386      A45R          IBM Flex System FC5052 2-port 16Gb FC Adapter               Supported
95Y2391      A45S          IBM Flex System FC5054 4-port 16Gb FC Adapter               Supported
69Y1942      A1BQ          IBM Flex System FC5172 2-port 16Gb FC Adapter               Supported
69Y1938      A1BM          IBM Flex System FC3172 2-port 8Gb FC Adapter                Supported
95Y2375      A2N5          IBM Flex System FC3052 2-port 8Gb FC Adapter                Supported
* The CN4054 is supported only in the x220 and in the x240 with E5-2600 v1 processors.
† The CN4054R is not supported in the x220 or in the x240 with E5-2600 v1 processors.

Consult the IBM ServerProven site for the current list of adapter cards that are supported in the Expansion Node:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

For information about these adapters, see the IBM Redbooks Product Guides for Flex System in the Adapters category:
http://www.redbooks.ibm.com/portals/puresystems?Open&page=pg&cat=adapters


Physical specifications

Dimensions and weight (approximate):
  • Height: 56 mm (2.2 in)
  • Depth: 489 mm (19.25 in)
  • Width: 217 mm (8.6 in)
  • Maximum weight: 5.4 kg (11.9 lb)

Shipping dimensions and weight (approximate):
  • Height: 240 mm (9.5 in)
  • Depth: 680 mm (26.8 in)
  • Width: 601 mm (23.7 in)
  • Weight: 9.5 kg (21 lb)


Operating environment

When the unit is powered on, it is supported in the following environment:
  • Temperature: 5° C to 40° C (41° F to 104° F)
  • Humidity, noncondensing: -12° C (10.4° F) dew point and 8% to 85% relative humidity
  • Maximum dew point: 24° C (75° F)
  • Maximum altitude: 3048 m (10,000 ft)
  • Maximum rate of temperature change: 5° C/hr (9° F/hr)
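The power-on limits above can be collected into a single hypothetical range check (my own sketch, not an IBM utility; the function name is illustrative):

```python
# Validate environmental readings against the power-on operating
# envelope: 5-40 C, 8-85% RH, dew point between -12 C and 24 C,
# altitude up to 3048 m.

def within_operating_limits(temp_c: float, rel_humidity_pct: float,
                            dew_point_c: float, altitude_m: float) -> bool:
    """Return True if all readings fall inside the supported power-on envelope."""
    return (5 <= temp_c <= 40
            and 8 <= rel_humidity_pct <= 85
            and -12 <= dew_point_c <= 24
            and altitude_m <= 3048)

# Example: 25 C, 50% RH, 14 C dew point at sea level is within limits.
```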


Regulatory compliance

The unit conforms to the following standards:
  • ASHRAE Class A3
  • FCC - Verified to comply with Part 15 of the FCC Rules Class A
  • Canada ICES-004, issue 3 Class A
  • UL/IEC 60950-1
  • CSA C22.2 No. 60950-1
  • NOM-019
  • Argentina IEC 60950-1
  • Japan VCCI, Class A
  • IEC 60950-1 (CB Certificate and CB Test Report)
  • China CCC (GB4943); (GB9254, Class A); (GB17625.1)
  • Taiwan BSMI CNS13438, Class A; CNS14336
  • Australia/New Zealand AS/NZS CISPR 22, Class A
  • Korea KN22, Class A, KN24
  • Russia/GOST ME01, IEC 60950-1, GOST R 51318.22, GOST R 51318.24, GOST R 51317.3.2, GOST R 51317.3.3
  • CE Mark (EN55022 Class A, EN60950-1, EN55024, EN61000-3-2, EN61000-3-3)
  • CISPR 22, Class A
  • TUV-GS (EN60950-1/IEC 60950-1, EK1-ITB2000)



Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
07 August 2012

Last Update
29 October 2014





IBM Form Number
TIPS0906