Mellanox ConnectX-3-based InfiniBand and Ethernet Adapters for IBM System x

IBM Redbooks Product Guide

Abstract

High-performance computing (HPC) solutions require high bandwidth, low latency components with CPU offloads to get the highest server efficiency and application productivity. The Mellanox ConnectX-3 FDR VPI InfiniBand/Ethernet and 10 Gb Ethernet adapters for IBM® System x® deliver the I/O performance that meets these requirements.

Mellanox's ConnectX-3 ASIC delivers low latency, high bandwidth, and computing efficiency for performance-driven server applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Mellanox's ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability of up to tens of thousands of nodes.

Changes in the 21 February 2014 update:
* SR-IOV support with Linux KVM
* Updated supported servers
* Updated supported operating systems


Introduction

High-performance computing (HPC) solutions require high bandwidth, low latency components with CPU offloads to get the highest server efficiency and application productivity. The Mellanox ConnectX-3 FDR VPI InfiniBand/Ethernet and 10 Gb Ethernet adapters for IBM® System x® deliver the I/O performance that meets these requirements.

Mellanox's ConnectX-3 ASIC delivers low latency, high bandwidth, and computing efficiency for performance-driven server applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. RDMA support extends to virtual servers when SR-IOV is enabled. Mellanox's ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability of up to tens of thousands of nodes.

KVM-based virtualized environments can take advantage of SR-IOV support, which enables up to 16 virtual PCI functions per adapter along with quality-of-service enhancements.
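The following minimal configuration sketch shows how SR-IOV virtual functions are typically enabled with the Linux mlx4 driver; the file name and the specific values (16 VFs, both ports in Ethernet mode) are illustrative assumptions rather than required settings:

# /etc/modprobe.d/mlx4_core.conf (illustrative)
# num_vfs: number of SR-IOV virtual functions to expose (up to 16 on these adapters)
# port_type_array: protocol per port, where 1 = InfiniBand and 2 = Ethernet
options mlx4_core num_vfs=16 port_type_array=2,2

After the mlx4_core module is reloaded, the virtual functions appear as additional PCI functions (visible with lspci) and can be assigned to KVM guests through standard PCI device assignment.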

Figure 1. Mellanox ConnectX-3 10GbE Adapter for IBM System x (3U bracket shown)

Did you know?

The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter supports Virtual Protocol Interconnect (VPI), which enables you to configure the external ports of the adapter to match your networking environment. Each port can sense the network topology to which it is connected (a configuration sketch follows the list below). Supported networks are:

  • InfiniBand: Up to 4x FDR-14; FDR-10, QDR, and DDR are also supported
  • Ethernet: Up to 40 Gb; single-port 10 Gb is also supported with an appropriate QSFP-to-SFP+ cable
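As an illustration of switching a VPI port between protocols, the Linux mlx4 driver exposes the port type through sysfs; the PCI address shown is a placeholder for the adapter's actual address:

# Set port 1 of the adapter at PCI address 0000:0b:00.0 to Ethernet
echo eth > /sys/bus/pci/devices/0000:0b:00.0/mlx4_port1
# Set port 2 to InfiniBand
echo ib > /sys/bus/pci/devices/0000:0b:00.0/mlx4_port2

Mellanox OFED also includes the connectx_port_config script, which applies the same settings interactively.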

Part number information

Table 1 shows the part numbers and feature codes for the adapters.

Table 1. Ordering part numbers and feature codes

Part number    Feature code    Description
00D9690        A3PM            Mellanox ConnectX-3 10GbE Adapter for IBM System x
00D9550        A3PN            Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x

The part numbers include the following:
  • One adapter with a 2U bracket attached
  • 3U bracket included in the box
  • Quick installation guide
  • Documentation CD
  • Warranty Flyer
  • IBM Important Notices Flyer

The 10GbE Adapter (00D9690) supports the direct-attach copper (DAC) twin-ax cables, transceivers, and optical cables that are listed in the following table.

Table 2. Supported transceivers and DAC cables - Mellanox ConnectX-3 10GbE Adapter

Part number   Feature code   Description
Direct-attach copper (DAC) cables
00D6288       A3RG           0.5 m IBM Passive DAC SFP+ Cable
90Y9427       A1PH           1 m IBM Passive DAC SFP+ Cable
90Y9430       A1PJ           3 m IBM Passive DAC SFP+ Cable
90Y9433       A1PK           5 m IBM Passive DAC SFP+ Cable
00D6151       A3RH           7 m IBM Passive DAC SFP+ Cable
95Y0323       A25A           1 m IBM Active DAC SFP+ Cable
95Y0326       A25B           3 m IBM Active DAC SFP+ Cable
95Y0329       A25C           5 m IBM Active DAC SFP+ Cable
SFP+ transceivers
46C3447       5053           IBM SFP+ SR Transceiver
45W2411       None           SFP+ Transceiver 10 Gbps SR (IB-000180)
49Y4216       0069           Brocade 10Gb SFP+ SR Optical Transceiver
49Y4218       0064           QLogic 10Gb SFP+ SR Optical Transceiver
Optical cables
88Y6851       A1DS           1 m LC-LC Fiber Cable (networking) - Optical
88Y6854       A1DT           5 m LC-LC Fiber Cable (networking) - Optical
88Y6857       A1DU           25 m LC-LC Fiber Cable (networking) - Optical

The FDR VPI IB/E Adapter (00D9550) supports the direct-attach copper (DAC) twin-ax cables, transceivers, and optical cables that are listed in the following table.

Table 3. Supported transceivers and DAC cables - Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter

Part number         Feature code   Description
Direct-attach copper (DAC) cables - InfiniBand
MC2207130-001       None           1 m Mellanox QSFP Passive Copper FDR14 InfiniBand Cable
MC2207128-003       None           3 m Mellanox QSFP Passive Copper FDR14 InfiniBand Cable
Optical cables - InfiniBand
MC2207310-003       None           3 m Mellanox QSFP Optical FDR14 InfiniBand Cable
MC2207310-005       None           5 m Mellanox QSFP Optical FDR14 InfiniBand Cable
MC2207310-010       None           10 m Mellanox QSFP Optical FDR14 InfiniBand Cable
MC2207310-015       None           15 m Mellanox QSFP Optical FDR14 InfiniBand Cable
MC2207310-020       None           20 m Mellanox QSFP Optical FDR14 InfiniBand Cable
MC2207310-030       None           30 m Mellanox QSFP Optical FDR14 InfiniBand Cable
10 Gb Ethernet (SFP+)
MC2309130-003       None           3 m QSFP+ to SFP+ passive copper cable
40 Gb Ethernet (QSFP+) - copper (QSFP+ to QSFP+ cables connect directly)
74Y6074 / 49Y7890   A1DP           1 m IBM QSFP+ to QSFP+ Cable
74Y6075 / 49Y7891   A1DQ           3 m IBM QSFP+ to QSFP+ Cable
00D5810             A2X8           5 m IBM QSFP-to-QSFP Cable
00D5813             A2X9           7 m IBM QSFP-to-QSFP Cable
40 Gb Ethernet (QSFP+) - optical (QSFP+ transceiver with MTP optical cables)
49Y7884             A1DR           IBM QSFP+ 40GBASE-SR4 Transceiver
90Y3519             A1MM           10 m IBM QSFP+ MTP Optical Cable
90Y3521             A1MN           30 m IBM QSFP+ MTP Optical Cable

Note: InfiniBand cables and the 10 Gb Ethernet cable can also be sourced from Mellanox.

Figure 2 shows the Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter. (The shipping adapter includes a heatsink over the ASIC; the figure does not show this heatsink.)


Figure 2. Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x with 3U bracket (required ASIC heatsink not shown)

Features

The Mellanox ConnectX-3 10GbE Adapter has the following features:

  • Two 10 Gigabit Ethernet ports
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • SR-IOV (16 Virtual Functions) supported by KVM
  • Enables low-latency RDMA over Ethernet (supported on both non-virtualized servers and SR-IOV-enabled virtualized servers)
  • TCP/UDP/IP stateless offload in hardware
  • Traffic steering across multiple cores
  • Intelligent interrupt coalescence
  • Industry-leading throughput and latency performance
  • Software compatible with standard TCP/UDP/IP stacks
  • Legacy and UEFI PXE network boot support

The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter has the following features:
  • Dual QSFP ports supporting FDR-14 InfiniBand or 40 Gb Ethernet
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • Support for InfiniBand FDR speeds of up to 56 Gbps (auto-negotiation to FDR-10, QDR, DDR, and SDR)
  • Support for Virtual Protocol Interconnect (VPI), which enables one adapter for both InfiniBand and 10/40 Gb Ethernet. Supports three configurations:
    • 2 ports InfiniBand
    • 2 ports Ethernet
    • 1 port InfiniBand and 1 port Ethernet
  • SR-IOV (16 Virtual Functions) supported by KVM (Ethernet mode only)
  • Enables low-latency RDMA over 40 Gb Ethernet (supported on both non-virtualized servers and SR-IOV-enabled virtualized servers)
  • High performance/low-latency networking
  • Sub-1 µs InfiniBand MPI ping latency
  • Support for 10 GbE connectivity through a QSFP-to-SFP+ cable
  • Traffic steering across multiple cores
  • Legacy and UEFI PXE network boot support (Ethernet mode only)

Performance

Based on Mellanox's ConnectX-3 technology, these adapters provide a high level of throughput performance for all network environments by removing I/O bottlenecks in mainstream servers that limit application performance. With the FDR VPI IB/E Adapter, servers can achieve up to 56 Gbps of transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless offload engines handle the segmentation, reassembly, and checksum calculations that otherwise burden the host processor.

RDMA over InfiniBand and RDMA over Ethernet further accelerate application run time while reducing CPU utilization. RDMA supports the very high-volume, transaction-intensive applications typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount. With the ConnectX-3-based adapter, highly compute-intensive tasks running on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times.
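RDMA applications reach the adapter through the OpenFabrics verbs API. The following minimal C sketch enumerates the installed RDMA devices and reports whether port 1 runs in InfiniBand or Ethernet mode; it assumes libibverbs is installed, and the choice of the first device and port 1 is purely illustrative:

/* query_rdma.c - list RDMA devices and report the link layer of port 1.
   Compile with: gcc query_rdma.c -o query_rdma -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    /* Open the first device (for example, a ConnectX-3 adapter, mlx4_0) */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(dev_list[0]));
        return 1;
    }

    /* On a VPI adapter the link layer reported for a port follows the
       port protocol configuration (InfiniBand or Ethernet) */
    struct ibv_port_attr port_attr;
    if (ibv_query_port(ctx, 1, &port_attr) == 0) {
        printf("%s port 1: state=%d, link layer=%s\n",
               ibv_get_device_name(dev_list[0]), port_attr.state,
               port_attr.link_layer == IBV_LINK_LAYER_ETHERNET ?
                   "Ethernet" : "InfiniBand");
    }

    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}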

In data mining or web crawl applications, RDMA provides the needed boost in performance to enable faster search by solving the network latency bottleneck that is associated with I/O cards and the corresponding transport technology in the cloud. Various other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (Content Delivery Network), business intelligence, database transactions, and various cloud computing applications. Mellanox ConnectX-3's low power consumption provides clients with high bandwidth and low latency at the lowest cost of ownership.

TCP/UDP/IP acceleration

Applications utilizing TCP/UDP/IP transport can achieve industry-leading data throughput. The hardware-based stateless offload engines in ConnectX-3 reduce the CPU impact of IP packet transport, allowing more processor cycles to work on the application.
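On Linux, the active stateless offloads can be inspected and toggled with the standard ethtool utility; the interface name eth2 below is a placeholder:

# List the current offload settings (checksum offload, TSO, GRO, and so on)
ethtool -k eth2
# Example: enable TCP segmentation offload on the interface
ethtool -K eth2 tso on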

Software support

All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, and VMware. ConnectX-3 adapters support OpenFabrics-based RDMA protocols and software. Stateless offload is fully interoperable with standard TCP/UDP/IP stacks. ConnectX-3 adapters are compatible with configuration and management tools from OEMs and operating system vendors.

Specifications

InfiniBand specifications (ConnectX-3 FDR VPI IB/E Adapter):

  • Supports InfiniBand FDR-14, FDR-10, QDR, DDR, and SDR
  • IBTA Specification 1.2.1 compliant
  • RDMA, Send/Receive semantics
  • Hardware-based congestion control
  • 16 million I/O channels
  • MTU sizes from 256 bytes to 4 KB; message sizes up to 1 GB
  • Nine virtual lanes: Eight data and one management

Enhanced InfiniBand specifications:
  • Hardware-based reliable transport
  • Hardware-based reliable multicast
  • Extended Reliable Connected transport
  • Enhanced Atomic operations
  • Fine-grained end-to-end quality of service (QoS)

Ethernet specifications:
  • IEEE 802.3ae 10 Gigabit Ethernet
  • IEEE 802.3ba 40 Gigabit Ethernet (ConnectX-3 FDR VPI IB/E Adapter)
  • IEEE 802.3ad Link Aggregation and Failover
  • IEEE 802.1Q, 1p VLAN tags and priority
  • IEEE P802.1au D2.0 Congestion Notification
  • IEEE P802.1az D0.2 Enhanced Transmission Selection (ETS)
  • IEEE P802.1bb D1.0 Priority-based Flow Control
  • IEEE 1588 Precision Clock Synchronization
  • Multicast
  • Jumbo frame support
  • 128 MAC/VLAN addresses per port

Hardware-based I/O virtualization:
  • Address translation and protection
  • Multiple queues per virtual machine
  • VMware NetQueue support
  • 16 virtual function SR-IOV supported with Linux KVM

Additional CPU offloads:
  • TCP/UDP/IP stateless offload
  • Intelligent interrupt coalescence
  • Compliant with Microsoft RSS and NetDMA

Management and tools:

InfiniBand:
  • Interoperable with OpenSM and other third-party subnet managers
  • Firmware and debug tools (MFT and IBDIAG)
Ethernet:
  • MIB, MIB-II, MIB-II Extensions, RMON, and RMON 2
  • Configuration and diagnostic tools

Protocol support:
  • Open MPI, OSU MVAPICH, Intel MPI, MS MPI, and Platform MPI
  • TCP/UDP, EoIB, and IPoIB (see the IPoIB example after this list)
  • uDAPL
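As a usage illustration, IPoIB presents an InfiniBand port to Linux as a regular network interface; the ib_ipoib module name is standard, while the interface name and IP address below are assumptions:

# Load the IPoIB driver (often loaded automatically by OFED)
modprobe ib_ipoib
# Assign an address to the InfiniBand port and bring the interface up
ip addr add 192.168.10.1/24 dev ib0
ip link set ib0 up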

Physical specifications

The adapters have the following dimensions:

  • Height: 168 mm (6.60 in)
  • Width: 69 mm (2.71 in)
  • Depth: 17 mm (0.69 in)
  • Weight: 208 g (0.46 lb)

Approximate shipping dimensions:
  • Height: 189 mm (7.51 in)
  • Width: 90 mm (3.54 in)
  • Depth: 38 mm (1.50 in)
  • Weight: 450 g (0.99 lb)

Regulatory approvals

  • EN55022
  • EN55024
  • EN60950-1
  • EN 61000-3-2
  • EN 61000-3-3
  • IEC 60950-1
  • FCC Part 15 Class A
  • UL 60950-1
  • CSA C22.2 60950-1-07
  • VCCI
  • AS/NZS 3548 / C-tick
  • RRL for MIC (KCC)
  • BSMI (EMC)
  • ICES-003:2004 Issue 4

Operating environment

The adapters are supported in the following environment:

Operating temperature:

  • 0 - 55° C (32 - 131° F) at 0 - 914 m (0 - 3,000 ft)
  • 10 - 32° C (50 - 90° F) at 914 to 2133 m (3,000 - 7,000 ft)

Relative humidity: 20% - 80% (noncondensing)
Maximum altitude: 2,133 m (7,000 ft)
Air flow: 200 LFM at 55° C

Power consumption:
  • Typical: 8.8 W (both ports active)
  • Maximum: 9.4 W with passive cables; 13.4 W with active optical modules

Warranty

One year limited warranty. When installed in an IBM System x server, these cards assume your system’s base warranty and any IBM ServicePac® upgrades.

Supported servers

The adapters are supported in the System x servers that are listed in the following table.

Table 4. Server compatibility (Part 1) - X6 and M4 servers with Xeon v2 processors

Server                          00D9690 10GbE Adapter   00D9550 40GbE / FDR IB VPI Adapter
x3250 M5 (5458)                 Y                       N
x3500 M4 (7383, E5-2600 v2)     N                       N
x3550 M4 (7914, E5-2600 v2)     Y                       Y
x3650 M4 (7915, E5-2600 v2)     Y                       Y
x3650 M4 HD (5460)              Y                       Y
x3850 X6 / x3950 X6 (3837)      Y                       Y
dx360 M4 (7912, E5-2600 v2)     Y                       Y
nx360 M4 (5455)                 Y                       Y

Table 4. Server compatibility (Part 2) - X5 and M4 servers with Xeon v1 processors

Server                          00D9690 10GbE Adapter   00D9550 40GbE / FDR IB VPI Adapter
x3100 M4 (2582)                 N                       N
x3250 M4 (2583)                 N                       N
x3300 M4 (7382)                 Y                       Y
x3500 M4 (7383, E5-2600)        N                       N
x3530 M4 (7160)                 Y                       Y
x3550 M4 (7914, E5-2600)        Y                       Y
x3630 M4 (7158)                 Y                       Y
x3650 M4 (7915, E5-2600)        Y                       Y
x3690 X5 (7147)                 Y                       Y
x3750 M4 (8722)                 Y                       Y
x3850 X5 (7143)                 Y                       Y
dx360 M4 (7912, E5-2600)        Y                       Y

Table 4. Server compatibility (Part 3) - M3 servers

Server                          00D9690 10GbE Adapter   00D9550 40GbE / FDR IB VPI Adapter
x3200 M3 (7327, 7328)           N                       N
x3250 M3 (4251, 4252)           N                       N
x3400 M3 (7378, 7379)           N                       N
x3500 M3 (7380)                 N                       N
x3550 M3 (7944)                 Y                       Y
x3620 M3 (7376)                 N                       N
x3630 M3 (7377)                 N                       N
x3650 M3 (7945)                 Y                       Y
x3755 M3 (7164)                 N                       N

Memory requirements: Ensure that your server has sufficient memory available to the adapters. The first adapter requires 8 GB of RAM in addition to the memory allocated to the operating system, applications, and virtual machines. Each additional adapter requires 4 GB of RAM.

Supported operating systems

The adapters support the following operating systems. Check IBM Fix Central for currently supported OS kernels:
http://ibm.com/support/fixcentral

  • SUSE Linux Enterprise Server (SR-IOV support requires SLES 11 SP2 or later)
  • Red Hat Enterprise Linux Server (SR-IOV support requires RHEL 6.3 or later)
  • VMware vSphere 5
  • Microsoft Windows Server 2008 (limited support)
  • Microsoft Windows Server 2012 R2

Note on VMware support: VMware is supported on the Mellanox ConnectX-3 10GbE Adapter for IBM System x (00D9690), and on the Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x (00D9550) in Ethernet mode only.


Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
21 February 2013

Last Update
21 February 2014


IBM Form Number
TIPS0897