2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter

IBM Redbooks Product Guide

Abstract

The 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter is a dual port InfiniBand Host Channel Adapter (HCA) based on proven Mellanox ConnectX IB technology. This HCA, when combined with the QDR switch, delivers end-to-end 40 Gb bandwidth per port. This solution is ideal for low latency, high bandwidth, performance-driven server and storage clustering applications in a High Performance Compute environment.

Introduction


The 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter is a dual port InfiniBand Host Channel Adapter (HCA) based on proven Mellanox ConnectX IB technology. This HCA, when combined with the QDR switch, delivers end-to-end 40 Gb bandwidth per port. This solution is ideal for low latency, high bandwidth, performance-driven server and storage clustering applications in a High Performance Compute environment. The adapter uses the CFFh form factor and can be combined with a CIOv or CFFv adapter to get additional SAS, Fibre Channel, or Ethernet ports.
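
As a quick sanity check on these figures, the short sketch below walks through the arithmetic behind the 40 Gb per-port number: a 4X QDR link runs four lanes at 10 Gbps each, and the 8b/10b line encoding used at QDR leaves 32 Gbps of payload per direction. This is an illustrative calculation only, not text from the product documentation.

/* Back-of-the-envelope figures for one 4X QDR InfiniBand port.
 * Illustrative only: assumes the standard QDR rate of 10 Gbps per lane
 * and 8b/10b line encoding. */
#include <stdio.h>

int main(void)
{
    const int    lanes          = 4;          /* a 4X port uses four lanes    */
    const double lane_rate_gbps = 10.0;       /* QDR signalling rate per lane */
    const double encoding       = 8.0 / 10.0; /* 8b/10b line encoding         */

    double raw  = lanes * lane_rate_gbps;     /* 40 Gbps per direction        */
    double data = raw * encoding;             /* 32 Gbps of payload           */

    printf("Raw signalling rate : %.0f Gbps per direction\n", raw);
    printf("Payload data rate   : %.0f Gbps per direction (%.1f GBps)\n",
           data, data / 8.0);
    return 0;
}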

Figure 1 shows the expansion card.

Figure 1. 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter

Did you know?

InfiniBand is a scalable, high-performance fabric that has already been used for petascale computing. Roadrunner, the world's largest supercomputer when this guide was published and the first to break the barrier of 1,000 trillion operations per second, is based on Mellanox ConnectX DDR adapters. QDR is the next generation, offering twice the bandwidth per port.

Part number information

Table 1 shows the part numbers to order the 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter.

Table 1. Part number and feature code for ordering
Description                                                        | Part number | Feature code
2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter | 46M6001     | 0056

The part number includes the following items:
  • One 2-Port 40 Gb InfiniBand Expansion Card (CFFh)
  • Documentation CD-ROM
  • Important Notices flyer


Features and specifications

The 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter includes the following features and specifications:
  • General features:
    • Form factor: CFFh
    • Host interface: PCIe x8 Gen 2 (5.0 GT/s), providing 40+40 Gbps of bidirectional bandwidth
    • Dual 4X InfiniBand ports at speeds of 10 Gbps, 20 Gbps, or 40 Gbps per port (see the verbs sketch after this list)
    • 6.5 GBps bidirectional throughput
    • RDMA and Send/Receive semantics
    • Hardware-based congestion control
    • Atomic operations
    • 16 million I/O channels
    • MTU sizes from 256 bytes to 4 KB
    • Message sizes up to 1 GB
    • 9 virtual lanes: 8 data + 1 management
    • 1 µs MPI ping latency
    • CPU offload of transport operations
    • End-to-end QoS and congestion control
    • TCP/UDP/IP stateless offload
  • Enhanced InfiniBand features
    • Hardware-based reliable transport
    • Hardware-based reliable multicast
    • Extended Reliable Connected transport
    • Enhanced Atomic operations
    • Fine grained end-to-end QoS
  • Hardware-based I/O virtualization features
    • Single Root IOV
    • Address translation and protection
    • Multiple queues per virtual machine
    • VMware NetQueue support
  • Protocol support
    • Open MPI, OSU MVAPICH, HP MPI, Intel MPI, MS MPI, Scali MPI
    • IPoIB, SDP, RDS
    • SRP, iSER, FCoIB and NFS RDMA
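
Many of the capabilities above (RDMA, send/receive, offloads) are consumed by applications through the InfiniBand verbs interface. The sketch below is a minimal example, assuming a Linux host with the OFED libibverbs library installed, that opens the first HCA found and prints each port's state, active link width, and active link speed; on this adapter a healthy link should report the verbs codes for 4X width and the 10.0 Gbps-per-lane (QDR) rate. It is provided for illustration only and is not part of the official product documentation.

/* Minimal libibverbs sketch: open the first HCA and report per-port link
 * attributes. Build with: gcc -o ibports ibports.c -libverbs */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    if (!ctx) {
        fprintf(stderr, "Cannot open %s\n", ibv_get_device_name(list[0]));
        ibv_free_device_list(list);
        return 1;
    }

    struct ibv_device_attr dev_attr;
    if (ibv_query_device(ctx, &dev_attr) == 0) {
        for (uint8_t p = 1; p <= dev_attr.phys_port_cnt; p++) {
            struct ibv_port_attr port;
            if (ibv_query_port(ctx, p, &port))
                continue;
            /* active_width and active_speed are encoded values:
             * width 2 = 4X, speed 4 = 10.0 Gbps per lane (QDR). */
            printf("port %u: state=%d width_code=%d speed_code=%d\n",
                   p, (int)port.state, port.active_width, port.active_speed);
        }
    }

    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}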

Operating environment

The 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter supports the following operating environment:
  • Temperature
    • 10 to 52 °C (50 to 125.6 °F) at an altitude of 0 to 914 m (0 to 3,000 ft)
    • 10 to 49 °C (50 to 120.2 °F) at an altitude of 0 to 3,000 m (0 to 10,000 ft)
  • Relative humidity
    • 8% to 80% (noncondensing)

Supported servers and I/O modules

Table 2 lists the IBM BladeCenter servers that the 2-Port 40 Gb InfiniBand Expansion Card for IBM BladeCenter supports.

Table 2. Supported servers

Expansion card: 2-Port 40 Gb InfiniBand Expansion Card (CFFh)
Supported servers (Y): HS12, HS21, HS21 XM, HS22, HS22V, HX5, LS21, LS22, LS41, LS42
Servers not supported (N): JS12, JS21, JS22, JS23/JS43, PS700/PS701/PS702

Figure 2 shows where the CFFh card is installed in a BladeCenter server.

Figure 2. Location on the BladeCenter server planar where the CFFh card is installed

IBM BladeCenter chassis support is based on the blade server type in which the expansion card is installed. Consult ServerProven to see which chassis each blade server type is supported in: http://ibm.com/servers/eserver/serverproven/compat/us/.

Table 3 lists the I/O modules that can be used to connect to the 2-Port 40 Gb InfiniBand Expansion Card (CFFh) for IBM BladeCenter. These I/O modules are supported in the BladeCenter H chassis only.

Table 3. I/O modules supported with the 2-Port 40 Gb InfiniBand Expansion Card (CFFh)

I/O module: Voltaire 40 Gb InfiniBand Switch Module (part number 46M6005)
Supported chassis (Y): BladeCenter H
Chassis not supported (N): BladeCenter S, BladeCenter E, BladeCenter T, BladeCenter HT, MSIM, MSIM-HT

In BladeCenter H, the ports of CFFh cards are routed through the midplane to I/O bays 7, 8, 9, and 10, as shown in Figure 3.

Figure 3. IBM BladeCenter H I/O topology showing the I/O paths from CFFh expansion cards

One I/O module must be installed in the chassis for each 4X InfiniBand port that you want to use on the expansion card. The required I/O bays are listed in Table 4. For the 2-Port 40 Gb InfiniBand Expansion Card (CFFh), install one I/O module in bays 7/8 and one in bays 9/10 (that is, two I/O modules, each of which occupies two adjacent high-speed bays); the sketch after Table 4 illustrates this port-to-bay relationship.

Table 4. Locations of I/O modules required to connect to the expansion card
Expansion card                                | I/O bays 7 and 8      | I/O bays 9 and 10
2-Port 40 Gb InfiniBand Expansion Card (CFFh) | Supported I/O module* | Supported I/O module*
* A single Voltaire 40 Gb InfiniBand Switch Module occupies two adjacent high-speed bays (7 and 8, or 9 and 10), while the expansion card has only two ports, one port per switch module.
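
The small sketch below restates the port-to-bay relationship from Table 4 in code form. Which physical adapter port connects to which bay pair is an assumption made here for illustration; the guide itself states only that each of the two ports requires its own double-wide switch module.

/* Illustrative mapping of adapter ports to BladeCenter H high-speed bays.
 * The specific port-to-bay-pair assignment is assumed for clarity. */
#include <stdio.h>

struct port_route {
    int adapter_port;       /* port on the CFFh expansion card              */
    const char *bay_pair;   /* double-wide high-speed bays in the H chassis */
    const char *io_module;  /* module that services this port               */
};

static const struct port_route routes[] = {
    { 1, "7/8",  "Voltaire 40 Gb InfiniBand Switch Module (46M6005)" },
    { 2, "9/10", "Voltaire 40 Gb InfiniBand Switch Module (46M6005)" },
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(routes) / sizeof(routes[0]); i++)
        printf("Adapter port %d -> bays %s -> %s\n",
               routes[i].adapter_port, routes[i].bay_pair, routes[i].io_module);
    return 0;
}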

Popular configurations

Figure 4 shows the use of the Voltaire 40 Gb InfiniBand Switch Module to route the two 4X InfiniBand ports from the 2-Port 40 Gb InfiniBand Expansion Card (CFFh) installed in each server. Two Voltaire 40 Gb InfiniBand Switch Modules are installed in bays 7/8 and bays 9/10 of the BladeCenter H chassis. All connections between the expansion cards and the switch modules are internal to the chassis; no cabling is needed.

Figure 4. A 40 Gb solution using 2-Port 40 Gb InfiniBand Expansion Card (CFFh) and Voltaire 40 Gb InfiniBand Switch Modules

Table 5 lists the components that this configuration uses.

Table 5. Components used when connecting 2-Port 40 Gb InfiniBand Expansion Card (CFFh) to two Voltaire 40 Gb InfiniBand Switch Modules
Diagram reference | Part number / machine type | Description                                    | Quantity
1                 | Varies                     | IBM BladeCenter HS22 or other supported server | 1 to 14
2                 | 46M6001                    | 2-Port 40 Gb InfiniBand Expansion Card (CFFh)  | 1 per server
3                 | 8852                       | BladeCenter H                                  | 1
4                 | 46M6005                    | Voltaire 40 Gb InfiniBand Switch Module        | 2
5                 | 49Y9980                    | 3 m Copper QDR InfiniBand QSFP Cable           | Up to 32*
* The Voltaire 40 Gb InfiniBand Switch Module has 16 external ports. To communicate outside of the chassis, QSFP cables must be connected to these ports. You have the flexibility to scale bandwidth by using from 1 to 16 connections per switch, as estimated in the sketch that follows.
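
As a rough planning aid, the sketch below estimates aggregate external bandwidth and the internal-to-external oversubscription ratio for different numbers of uplink cables per switch. It assumes each switch module presents 14 internal ports (one per blade bay in a fully populated BladeCenter H); that figure is an inference from the chassis design, not a number stated in this guide.

/* Rough uplink-planning sketch for one Voltaire 40 Gb switch module.
 * Assumes 14 internal ports (one per blade bay in a BladeCenter H) and
 * up to 16 external QSFP uplinks, all running at 4X QDR. */
#include <stdio.h>

int main(void)
{
    const int internal_ports = 14;   /* assumption: one per blade bay  */
    const int max_uplinks    = 16;   /* external QSFP ports per switch */
    const double port_gbps   = 40.0; /* raw 4X QDR rate per port       */

    for (int uplinks = 1; uplinks <= max_uplinks; uplinks *= 2) {
        double ratio = (double)internal_ports / uplinks;
        printf("%2d uplink(s): %6.1f Gbps aggregate external, %4.1f:1 oversubscription\n",
               uplinks, uplinks * port_gbps, ratio);
    }
    return 0;
}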

Operating system support

The 2-Port 40 Gb InfiniBand Expansion Card (CFFh) supports the following operating systems:
  • Microsoft Windows Server 2008 R2
  • Microsoft Windows Server 2008, Datacenter x64 Edition
  • Microsoft Windows Server 2008, Datacenter x86 Edition
  • Microsoft Windows Server 2008, Enterprise x64 Edition
  • Microsoft Windows Server 2008, Enterprise x86 Edition
  • Microsoft Windows Server 2008, Standard x64 Edition
  • Microsoft Windows Server 2008, Standard x86 Edition
  • Microsoft Windows Server 2008, Web x64 Edition
  • Microsoft Windows Server 2008, Web x86 Edition
  • Red Hat Enterprise Linux 4 AS for AMD64/EM64T
  • Red Hat Enterprise Linux 4 AS for x86
  • Red Hat Enterprise Linux 5 Server Edition
  • SUSE LINUX Enterprise Server 10 for AMD64/EM64T
  • SUSE LINUX Enterprise Server 11 for AMD64/EM64T

Support for operating systems is based on the combination of the expansion card and the blade server in which it is installed. See IBM ServerProven for the latest information about the specific versions and service packs supported: http://ibm.com/servers/eserver/serverproven/compat/us/. Select the blade server, and then select the expansion card to see the supported operating systems.

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
30 June 2009

Last Update
07 December 2010




Author(s)

IBM Form Number
TIPS0700