IBM Flex System IB6132 2-port QDR InfiniBand Adapter

IBM Redbooks Product Guide


Abstract

The IBM Flex System™ IB6132 2-port QDR InfiniBand Adapter delivers low latency and high bandwidth for applications in enterprise data centers, high-performance computing (HPC), and embedded environments. The adapter is designed to operate at InfiniBand QDR speed (40 Gbps, or 10 Gbps per lane). Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements, which helps reduce completion times and lowers the cost per operation. The IBM Flex System IB6132 2-port QDR InfiniBand Adapter simplifies network deployment by consolidating clustering, communications, and management I/O. The adapter is based on Mellanox ConnectX-2 technology, which improves network performance by increasing available bandwidth to the CPU, especially in virtualized server environments.

Changes in the 21 October 2014 update:
* Added the x240 M5 compute node to Table 2



The IBM Flex System™ IB6132 2-port QDR InfiniBand Adapter delivers low latency and high bandwidth for applications in enterprise data centers, high-performance computing (HPC), and embedded environments. The adapter is designed to operate at InfiniBand QDR speed (40 Gbps, or 10 Gbps per lane). Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements, which helps reduce completion times and lowers the cost per operation.

The adapter is based on Mellanox ConnectX-2 technology, which improves network performance by increasing available bandwidth to the CPU, especially in virtualized server environments.

Figure 1 shows the IBM Flex System IB6132 2-port QDR InfiniBand Adapter.
Figure 1. IBM Flex System IB6132 2-port QDR InfiniBand Adapter


Did you know?

This adapter is designed for low latency, high bandwidth, and computing efficiency in server and storage clustering applications. Combined with the IB6131 InfiniBand Switch, your organization can achieve efficient computing by offloading protocol processing and data movement overhead, such as RDMA and Send/Receive semantics, from the CPU, leaving more processor power for the application.


Part number information

Table 1 shows the part number to order this card.

Table 1. Part number and feature code for ordering
Description                                            Part number    Feature code (x-config)    Feature code (e-config)
IBM Flex System IB6132 2-port QDR InfiniBand Adapter   None*          None*                      1761
* This adapter is only available through the Power Systems® sales channel. It is not available through the System x® sales channel.


Features

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter with its ConnectX-2 controller has the features discussed in the following sections.

InfiniBand

ConnectX-2 delivers low latency, high bandwidth, and computing efficiency for server and storage clustering applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Graphics processing unit (GPU) communication acceleration provides additional efficiencies by eliminating unnecessary internal data copies, which significantly reduces application run time. ConnectX-2 acceleration technology enables higher cluster efficiency and scalability up to tens of thousands of nodes.
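
On Linux, the offload capabilities described above are exposed to applications through the standard OFED verbs API (libibverbs, part of the Mellanox OFED package listed under Specifications). The following minimal sketch is illustrative only and is not IBM- or Mellanox-supplied code: it assumes that the libibverbs development headers are installed and that the adapter is the first RDMA device reported, and it simply opens the device and queries port 1 to confirm the negotiated link width and speed.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (devs == NULL || num_devices == 0) {
            fprintf(stderr, "No RDMA devices found\n");
            return 1;
        }

        /* Open the first reported device (an assumption for this sketch). */
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (ctx == NULL) {
            fprintf(stderr, "Cannot open %s\n", ibv_get_device_name(devs[0]));
            ibv_free_device_list(devs);
            return 1;
        }

        /* Query port 1: active_speed 4 = QDR (10 Gbps per lane) and
           active_width 2 = 4X together indicate a 40 Gbps link. */
        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s port 1: state=%d width=%d speed=%d\n",
                   ibv_get_device_name(devs[0]),
                   (int)port.state, (int)port.active_width, (int)port.active_speed);
        }

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

On a working QDR fabric, the query would report active_width 2 (4X) and active_speed 4 (10 Gbps per lane). Saved as, for example, ibcheck.c, the sketch can be compiled with cc -o ibcheck ibcheck.c -libverbs.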

RDMA over converged Ethernet

ConnectX-2 uses the InfiniBand Trade Association's RDMA over Converged Ethernet (RoCE) technology to deliver similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. The RoCE software stack maintains existing and future compatibility with bandwidth-sensitive and latency-sensitive applications. With link-level interoperability in the existing Ethernet infrastructure, network administrators can use existing data center fabric management solutions.
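
Whether the transport is native InfiniBand or RoCE, zero-copy data movement begins with registering application memory with the adapter through the same verbs API. The sketch below is a hypothetical illustration (the helper name register_rdma_buffer is not from IBM or Mellanox documentation); it assumes an already open device context, for example one obtained as in the previous sketch.

    #include <stddef.h>
    #include <infiniband/verbs.h>

    /* Illustrative helper: register a buffer for RDMA so the adapter can
       move data in and out of it without CPU copies. */
    static struct ibv_mr *register_rdma_buffer(struct ibv_context *ctx,
                                               void *buf, size_t len)
    {
        /* A protection domain groups the queue pairs and memory regions
           that are allowed to work together. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (pd == NULL)
            return NULL;

        /* Registration pins the pages and returns the keys (lkey/rkey)
           that local and remote peers use in RDMA work requests. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (mr == NULL)
            ibv_dealloc_pd(pd);
        return mr;
    }

The lkey and rkey fields of the returned ibv_mr structure are the handles that local and remote peers place in work requests, so that the adapter rather than the CPU performs the data movement.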

TCP/UDP/IP acceleration

Applications that use TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10 Gb Ethernet. The hardware-based stateless offload engines in ConnectX-2 reduce the CPU overhead of IP packet transport, allowing more processor cycles to work on the application.

I/O virtualization

ConnectX-2 with Virtual Intelligent Queuing (Virtual-IQ) technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server. I/O virtualization with ConnectX-2 gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.

Storage accelerated

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can use InfiniBand RDMA for high-performance storage access. T11-compliant encapsulation (FCoIB or FCoE) with full hardware offload simplifies the storage network while keeping existing Fibre Channel targets.


Specifications

The IBM Flex System IB6132 2-port QDR InfiniBand Adapter has the following features and specifications:
  • Based on Mellanox ConnectX-2 technology
  • Virtual Protocol Interconnect (VPI)
  • InfiniBand Architecture Specification v1.2.1 compliant
  • Supported InfiniBand speeds (auto-negotiated):
    • 1X/2X/4X SDR (2.5 Gbps per lane)
    • DDR (5 Gbps per lane)
    • QDR (10 Gbps per lane)
  • IEEE Std. 802.3 compliant
  • PCI Express 2.0 x8 host interface, up to 5 GT/s per lane (see the bandwidth note after this list)
  • CPU off-load of transport operations
  • CORE-Direct application off-load
  • GPUDirect application off-load
  • RDMA over Converged Ethernet (RoCE)
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • TCP/UDP/IP stateless off-load
  • Ethernet encapsulation (EoIB)
  • RoHS-6 compliant
  • Mellanox OFED software package support for SUSE Linux and Red Hat Linux
  • Power consumption: 15.6 W typical, 17.9 W maximum
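
A quick bandwidth check on these figures, assuming the standard 8b/10b encoding used by both QDR InfiniBand and PCI Express 2.0: a 4X QDR link signals at 4 × 10 Gbps = 40 Gbps and carries 4 × 8 Gbps = 32 Gbps (about 4 GB/s) of payload, while the PCI Express 2.0 x8 host interface provides 8 × 5 GT/s × 8/10 = 32 Gbps (about 4 GB/s) per direction. The host interface is therefore sized to sustain one QDR port at full rate.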


Supported servers

The following table lists the IBM Flex System compute nodes that support the IB6132 2-port QDR InfiniBand Adapter.

Table 2. Supported servers
Adapter: IBM Flex System IB6132 2-port QDR InfiniBand Adapter (feature code 1761)

Compute node                     Supported
x220 (7906)                      No
x222 (7916)                      No
x240 (8737, E5-2600)             No
x240 (8737, E5-2600 v2)          No
x240 M5 (9532)                   No
x440 (7917)                      No
x280 / x480 / x880 X6 (7903)     No
p24L (1457)                      Yes
p260 (7895)                      Yes
p270 (7954)                      Yes
p460 (7895)                      Yes

I/O adapters are installed in the I/O adapter slots of supported servers, such as the p260, as highlighted in the following figure.

Figure 2. Location of the I/O adapter slots in the IBM Flex System p260 Compute Node (The IB6132 2-port QDR InfiniBand Adapter is installed in slot 2.)


Supported I/O modules

The IB6132 2-port QDR InfiniBand Adapter supports the I/O module listed in the following table. One or two compatible switches must be installed in the corresponding I/O bays in the chassis. Installing two switches means that both ports of the adapter are enabled.

Table 3. I/O modules supported with the IB6132 2-port QDR InfiniBand Adapter
Description                                 Power Systems feature code
IBM Flex System IB6131 InfiniBand Switch    3699

The following table shows the connections between the adapter ports in the compute nodes and the switch bays in the chassis.

Table 4. Adapter to I/O bay correspondence
I/O adapter slot in the server    Port on the adapter    Corresponding I/O module bay in the chassis
Slot 1*                           Port 1                 Module bay 1
Slot 1*                           Port 2                 Module bay 2
Slot 2                            Port 1                 Module bay 3
Slot 2                            Port 2                 Module bay 4
* Slot 1 of the p260 and p460 always contains a 10 Gb Ethernet adapter.

The connections between the adapters installed in the compute nodes and the switch bays in the chassis are shown diagrammatically in the following figure.

Figure 3. Logical layout of the interconnects between I/O adapters and I/O modules


Supported operating systems

The IB6132 2-port QDR InfiniBand Adapter supports the following operating systems:
  • AIX Version 7.1
  • Red Hat Enterprise Linux 5 for IBM POWER
  • Red Hat Enterprise Linux 6 for IBM POWER
  • SUSE LINUX Enterprise Server 11 for IBM POWER


Regulatory compliance

The adapter conforms to the following standards:
  • United States FCC 47 CFR Part 15, Subpart B, ANSI C63.4 (2003), Class A
  • United States UL 60950-1, Second Edition
  • IEC/EN 60950-1, Second Edition
  • FCC - Verified to comply with Part 15 of the FCC Rules, Class A
  • Canada ICES-003, issue 4, Class A
  • UL/IEC 60950-1
  • CSA C22.2 No. 60950-1-03
  • Japan VCCI, Class A
  • Australia/New Zealand AS/NZS CISPR 22:2006, Class A
  • Taiwan BSMI CNS13438, Class A
  • Korea KN22, Class A; KN24
  • Russia/GOST ME01, IEC-60950-1, GOST R 51318.22-99, GOST R 51318.24-99, GOST R 51317.3.2-2006, GOST R 51317.3.3-99
  • IEC 60950-1 (CB Certificate and CB Test Report)
  • CE Mark (EN55022 Class A, EN60950-1, EN55024, EN61000-3-2, EN61000-3-3)
  • CISPR 22, Class A


Physical specifications

The dimensions and weight of the adapter are as follows:
  • Width: 100 mm (3.9 inches)
  • Depth: 80 mm (3.1 inches)
  • Weight: 13 g (0.03 lb)

Shipping dimensions and weight (approximate):
  • Height: 58 mm (2.3 in)
  • Width: 229 mm (9.0 in)
  • Depth: 208 mm (8.2 in)
  • Weight: 0.4 kg (0.89 lb)


Popular configurations

The IB6132 2-port QDR InfiniBand Adapter is designed to be used with the IB6131 InfiniBand Switch. The following figure shows one IB6132 adapter installed in slot 2 of a p260 Compute Node, which in turn is installed in the chassis. Two IB6131 InfiniBand Switches are installed in I/O bays 3 and 4.

Example configuration
Figure 4. Example configuration

The following table lists the parts that are used in the configuration. The configuration also includes the FDR upgrade, which enables all external switch ports to run at FDR speed (56 Gbps).

Table 5. Components used when connecting the IB6132 2-port QDR InfiniBand Adapter to the IB6131 InfiniBand Switches
Machine type / feature    Description                                                   Quantity
7895-22X                  IBM Flex System p260 Compute Node                             1 to 14
7895 feature A1QZ         IB6132 2-port QDR InfiniBand Adapter                          1 per server
7893-92X                  IBM Flex System Enterprise Chassis                            1
7893 feature 3699         IBM Flex System IB6131 InfiniBand Switch (in bays 3 and 4)    1 or 2
7893 feature ESW1         IBM Flex System IB6131 InfiniBand Switch FDR Upgrade          1 or 2



Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
11 April 2012

Last Update
21 October 2014





IBM Form Number
TIPS0890