The IBM Flex System™ IB6132 2-port FDR InfiniBand Adapter delivers low latency and high bandwidth for performance-driven server clustering applications in enterprise data centers, high-performance computing (HPC), and embedded environments. The adapter is designed to operate at InfiniBand FDR speeds (56 Gbps, or 14 Gbps per lane). Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications can achieve significant performance improvements, which helps reduce completion time and lower the cost per operation. The IBM Flex System IB6132 2-port FDR InfiniBand Adapter simplifies network deployment by consolidating clustering, communications, and management I/O, and helps provide enhanced performance in virtualized server environments.
Changes in the June 18 update:
* SR-IOV not supported
* Updated supported servers
* Updated supported operating systems
Figure 1 shows the IBM Flex System IB6132 2-port FDR InfiniBand Adapter.
Figure 1. IBM Flex System IB6132 2-port FDR InfiniBand Adapter
Did you know?
Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low, sub-microsecond latency for performance-driven server clustering applications. Combined with the IB6131 InfiniBand Switch, your organization can achieve efficient computing by off-loading protocol processing and data movement overhead, such as Remote Direct Memory Access (RDMA) and Send/Receive semantics, from the CPU, leaving more processor power for the application. Advanced acceleration technology enables more than 90 million Message Passing Interface (MPI) messages per second, making this a highly scalable adapter that delivers cluster efficiency and scalability to tens of thousands of nodes.
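To make the Send/Receive semantics concrete, here is a minimal MPI ping-pong sketch. It is an illustration only, not part of this guide: it assumes two ranks and an MPI library (for example, Open MPI or MVAPICH2) built with an InfiniBand transport, so that the message exchange below is carried by the adapter rather than by a TCP network.

```c
/* Minimal MPI ping-pong between ranks 0 and 1.
 * Illustrative sketch; assumes an InfiniBand-capable MPI build. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[64] = "ping";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Send a small message and wait for the reply. */
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 received: %s\n", buf);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        strcpy(buf, "pong");
        MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc pingpong.c -o pingpong` and run with, for example, `mpirun -np 2 ./pingpong`.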
Part number information
Table 1 shows the part number to order this card.
Table 1. Part number and feature code for ordering
|Description|Part number|Feature code (x-config)|Feature code (e-config)|
|---|---|---|---|
|IBM Flex System IB6132 2-port FDR InfiniBand Adapter|90Y3454|A1QZ|EC2C|
The part number includes the following items:
- One IBM Flex System IB6132 2-port FDR InfiniBand Adapter
- A documentation CD containing the adapter user’s guide
- The IBM® Important Notices document
The IBM Flex System IB6132 2-port FDR InfiniBand Adapter has the following features.
Based on Mellanox ConnectX-3 technology, the IB6132 2-port FDR InfiniBand Adapter provides a high level of throughput performance for all network environments by removing I/O bottlenecks in mainstream servers that limit application performance. Servers can achieve up to 56 Gbps transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless off-load engines handle the segmentation, reassembly, and checksum calculations that otherwise burden the host processor.
RDMA over the InfiniBand fabric further accelerates application run time while reducing CPU utilization. RDMA allows very high-volume, transaction-intensive applications that are typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount, to take advantage of the low-latency fabric. With the ConnectX-3-based adapter, highly compute-intensive tasks that run on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times. High-frequency transaction applications can access trading information more quickly, ensuring that the trading servers respond first to new market data and market inefficiencies, while the higher throughput enables higher-volume trading, maximizing liquidity and profitability.
In data mining or web crawl applications, RDMA provides the needed boost in performance to search faster by solving the network latency bottleneck associated with I/O cards and the corresponding transport technology in the cloud. Various other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (Content Delivery Network), business intelligence, database transactions, and various Cloud computing applications. Mellanox ConnectX-3's low power consumption provides clients with high bandwidth and low latency at the lowest cost of ownership.
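To show what RDMA means for application code, the following sketch registers a buffer with the adapter through the standard Linux libibverbs API (a typical OFED software stack is assumed here; this guide does not prescribe a particular API). Once registered, the buffer can be read or written directly by the HCA without involving the host CPU in the data path.

```c
/* Sketch: registering a buffer for RDMA with libibverbs.
 * Hypothetical illustration; link with -libverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 4 KB so the HCA can DMA directly to/from it; the returned
     * rkey is what a peer uses in RDMA read/write work requests. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```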
Quality of service
Resource allocation per application or per VM is provided by the advanced quality of service (QoS) supported by ConnectX-3. Service levels for multiple traffic types can be assigned on a per flow basis, allowing system administrators to prioritize traffic by application, virtual machine, or protocol. This powerful combination of QoS and prioritization provides the ultimate fine-grained control of traffic, ensuring that applications run smoothly in today’s complex environments.
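At the API level, the per-flow service levels described above surface as the InfiniBand service level (SL) field of a queue pair's address vector. The following sketch, again using libibverbs, assigns an SL while transitioning a reliable-connected QP to the ready-to-receive state; the SL value of 3 and the function name are illustrative assumptions, not values from this guide.

```c
/* Sketch: assigning an InfiniBand service level (SL) to a connected QP. */
#include <infiniband/verbs.h>
#include <stdint.h>

int set_service_level(struct ibv_qp *qp, uint8_t sl,
                      uint16_t dlid, uint32_t dest_qpn)
{
    struct ibv_qp_attr attr = {
        .qp_state           = IBV_QPS_RTR,
        .path_mtu           = IBV_MTU_2048,
        .dest_qp_num        = dest_qpn,
        .rq_psn             = 0,
        .max_dest_rd_atomic = 1,
        .min_rnr_timer      = 12,
        .ah_attr = {
            .dlid     = dlid,
            .sl       = sl,   /* per-flow priority class (0-15) */
            .port_num = 1,
        },
    };

    return ibv_modify_qp(qp, &attr,
                         IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                         IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                         IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
}
```

The subnet manager maps each SL to a virtual lane on the fabric, which is how per-flow priorities take effect end to end.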
The IBM Flex System IB6132 2-port FDR InfiniBand Adapter has the following specifications:
- Based on Mellanox ConnectX-3 technology
- InfiniBand Architecture Specification v1.2.1 compliant
- Supported InfiniBand speeds (auto-negotiated):
- 1X/2X/4X Single Data Rate (SDR) (2.5 Gb/s per lane)
- Double Data Rate (DDR) (5 Gb/s per lane)
- Quad Data Rate (QDR) (10 Gb/s per lane)
- FDR10 (40 Gb/s, 10 Gb/s per lane)
- Fourteen Data Rate (FDR) (56 Gb/s, 14 Gb/s per lane)
- PCI Express 3.0 x8 host interface, up to 8 gigatransfers per second (GT/s) per lane (see the worked rates after this list)
- CPU off-load of transport operations
- CORE-Direct® application off-load
- GPUDirect application off-load
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- Transmission Control Protocol (TCP)/User Datagram Protocol (UDP)/Internet Protocol (IP) stateless off-load
- Ethernet encapsulation (EoIB)
- RoHS-6 compliant
- Power consumption: 9 W typical, 11 W maximum
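As a rough consistency check (our arithmetic, not a figure from the guide), the FDR and host-interface data rates work out as follows; InfiniBand FDR uses 64b/66b encoding, and PCIe 3.0 uses 128b/130b encoding:

\[ \text{FDR 4x: } 4 \times 14\ \text{Gb/s} = 56\ \text{Gb/s raw}, \qquad 56 \times \tfrac{64}{66} \approx 54.3\ \text{Gb/s of payload} \]
\[ \text{PCIe 3.0 x8: } 8\ \text{GT/s} \times 8\ \text{lanes} \times \tfrac{128}{130} \approx 63\ \text{Gb/s per direction} \]

A single x8 host interface therefore cannot sustain both 56 Gbps ports at line rate simultaneously, which is consistent with the shared-bandwidth note under Supported I/O modules.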
Note: To operate at InfiniBand FDR speeds, the IBM Flex System IB6131 InfiniBand Switch requires the FDR Upgrade license, 90Y3462.
The following table lists the IBM Flex System compute nodes that support the IB6132 2-port FDR InfiniBand Adapter.
Table 2. Supported servers
|Description|Part number|
|---|---|
|IBM Flex System IB6132 2-port FDR InfiniBand Adapter|90Y3454|
For the latest information about the expansion cards that are supported by each compute node type, see IBM ServerProven®.
I/O adapter cards are installed in the I/O adapter slots of supported servers, such as the x240, as highlighted in the following figure.
Figure 2. Location of the I/O adapter slots in the IBM Flex System x240 Compute Node
Supported I/O modules
The IB6132 2-port FDR InfiniBand Adapter supports the I/O modules listed in the following table. One or two compatible switches must be installed in the corresponding I/O bays in the chassis; installing two switches enables both ports of the adapter, and the adapter's total bandwidth of 56 Gbps is shared across the two ports. To operate at FDR speeds (56 Gbps), you must also install the FDR Upgrade license, 90Y3462.
Table 3. I/O modules supported with the IB6132 2-port FDR InfiniBand Adapter
|Description|Part number|
|---|---|
|IBM Flex System IB6131 InfiniBand Switch|90Y3450|
|IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade)*|90Y3462|
* The FDR Upgrade license is required for the switch to operate at FDR (56 Gbps) speeds.
The following table shows the connections between the adapter ports in the compute nodes and the switch bays in the chassis; the mapping can also be expressed as a small formula, shown after the table.
Table 4. Adapter to I/O bay correspondence
|I/O adapter slot in the server|Port on the adapter|Corresponding I/O module bay in the chassis|
|---|---|---|
|Slot 1|Port 1|Module bay 1|
|Slot 1|Port 2|Module bay 2|
|Slot 2|Port 1|Module bay 3|
|Slot 2|Port 2|Module bay 4|
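As noted before the table, the slot/port-to-bay pattern is regular. The tiny C program below is a hypothetical illustration, not an IBM tool; it reproduces Table 4 exactly.

```c
/* Sketch: the slot/port-to-bay pattern from Table 4 as a formula. */
#include <stdio.h>

/* For a two-port adapter: module bay = 2 * (slot - 1) + port. */
static int module_bay(int slot, int port)
{
    return 2 * (slot - 1) + port;
}

int main(void)
{
    for (int slot = 1; slot <= 2; slot++)
        for (int port = 1; port <= 2; port++)
            printf("Slot %d, Port %d -> I/O module bay %d\n",
                   slot, port, module_bay(slot, port));
    return 0;
}
```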
The connections between the adapters installed in the compute nodes and the switch bays in the chassis are shown diagrammatically in the following figure.
Figure 3. Logical layout of the interconnects between I/O adapters and I/O modules
Supported operating systems
The IB6132 2-port FDR InfiniBand Adapter supports the following 64-bit operating systems:
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2008, Datacenter x64 Edition
- Microsoft Windows Server 2008, Enterprise x64 Edition
- Microsoft Windows Server 2008, Standard x64 Edition
- Microsoft Windows Server 2008, Web x64 Edition
- Microsoft Windows Server 2012
- Microsoft Windows Server 2012 R2
- Red Hat Enterprise Linux 5 Server x64 Edition
- Red Hat Enterprise Linux 6 Server x64 Edition
- SUSE Linux Enterprise Server 10 for AMD64/EM64T
- SUSE Linux Enterprise Server 11 for AMD64/EM64T
- VMware ESX 4.1
- VMware vSphere 5.0 (ESXi)
The adapter conforms to the following standards:
- United States FCC 47 CFR Part 15, Subpart B, ANSI C63.4 (2003), Class A
- United States UL 60950-1, Second Edition
- IEC/EN 60950-1, Second Edition
- FCC - Verified to comply with Part 15 of the FCC Rules, Class A
- Canada ICES-003, issue 4, Class A
- UL/IEC 60950-1
- CSA C22.2 No. 60950-1-03
- Japan VCCI, Class A
- Australia/New Zealand AS/NZS CISPR 22:2006, Class A
- Taiwan BSMI CNS13438, Class A
- Korea KN22, Class A; KN24
- Russia/GOST ME01, IEC-60950-1, GOST R 51318.22-99, GOST R 51318.24-99, GOST R 51317.3.2-2006, GOST R 51317.3.3-99
- IEC 60950-1 (CB Certificate and CB Test Report)
- CE Mark (EN55022 Class A, EN60950-1, EN55024, EN61000-3-2, and EN61000-3-3)
- CISPR 22, Class A
The dimensions and weight of the adapter are as follows:
- Width: 100 mm (3.9 inches)
- Depth: 80 mm (3.1 inches)
- Weight: 136 g (0.3 lb)
Shipping dimensions and weight (approximate):
- Height: 58 mm (2.3 in)
- Width: 229 mm (9.0 in)
- Depth: 208 mm (8.2 in)
- Weight: 0.4 kg (0.89 lb)
The IB6132 2-port FDR InfiniBand Adapter is designed to be used with the IB6131 InfiniBand Switch. The following figure shows one IB6132 adapter installed in slot 2 of an x240 Compute Node, which in turn is installed in the chassis. Two IB6131 InfiniBand Switches are installed in I/O bays 3 and 4.
Figure 4. Example configuration
The following table lists the parts that are used in the configuration. This configuration includes the FDR upgrade license for the IB6131 switch, as well as the FDR cables.
Table 5. Components used when connecting the IB6132 2-port FDR InfiniBand Adapter to the IB6131 InfiniBand Switches
|Part number / machine type|Description|Quantity|
|---|---|---|
|8737|IBM Flex System x240 Compute Node|1 to 14|
|90Y3454|IB6132 2-port FDR InfiniBand Adapter|1 per server|
|8721-A1x|IBM Flex System Enterprise Chassis|1|
|90Y3450|IBM Flex System IB6131 InfiniBand Switch (in bays 3 and 4)|1 or 2|
|90Y3462|IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade)|1 or 2|
|90Y3470|3 m FDR InfiniBand Cable|Up to 36 (18 per switch)|
For more information, see the following resources:
- IBM Flex System IB6131 InfiniBand Switch Product Guide
- IBM Flex System x240 Compute Node Product Guide
- IBM Flex System Information Center (User's Guides for servers and options)
- IBM Flex System Interoperability Guide
- IBM Redbooks® publication IBM Flex System Products and Technology, SG24-7984
- IBM Redbooks Product Guides for IBM Flex System servers and options
- IBM Configurator for e-business (e-config)
- IBM System x and Cluster Solutions configurator (x-config)
- IBM System x Configuration and Options Guide
- ServerProven for IBM Flex System
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.