Mellanox ConnectX-3 and ConnectX-3 Pro Adapters for IBM System x

IBM Redbooks Product Guide


Abstract

High-performance computing (HPC) solutions require high-bandwidth, low-latency components with CPU offloads to achieve the highest server efficiency and application productivity. The Mellanox ConnectX-3 and ConnectX-3 Pro network adapters for IBM® System x® servers deliver the I/O performance that meets these requirements.

The Mellanox ConnectX-3 and ConnectX-3 Pro ASICs deliver low latency, high bandwidth, and computing efficiency for performance-driven server applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement tasks, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. RDMA support extends to virtual servers when SR-IOV is enabled. Mellanox's ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability of up to tens of thousands of nodes.

Changes in the 8 September 2014 update:
* New Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter
* Updated server table, Table 4


Introduction

High-performance computing (HPC) solutions require high-bandwidth, low-latency components with CPU offloads to achieve the highest server efficiency and application productivity. The Mellanox ConnectX-3 and ConnectX-3 Pro network adapters for IBM® System x® servers deliver the I/O performance that meets these requirements.

The Mellanox ConnectX-3 and ConnectX-3 Pro ASICs deliver low latency, high bandwidth, and computing efficiency for performance-driven server applications. Efficient computing is achieved by offloading routine activities from the CPU, which makes more processor power available for the application. Network protocol processing and data movement tasks, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. RDMA support extends to virtual servers when SR-IOV is enabled. Mellanox's ConnectX-3 advanced acceleration technology enables higher cluster efficiency and scalability of up to tens of thousands of nodes.

Figure 1. Mellanox ConnectX-3 10GbE Adapter for IBM System x (3U bracket shown)

Did you know?

Mellanox Ethernet and InfiniBand network server adapters provide a high-performing interconnect solution for enterprise data centers, Web 2.0, cloud computing, and HPC environments, where low latency and interconnect efficiency are paramount. In addition, Virtual Protocol Interconnect (VPI) offers flexibility in InfiniBand and Ethernet port designations.

The new ConnectX-3 Pro adapter adds VXLAN and NVGRE hardware offload engines that accelerate the processing of overlay network (virtual LAN ID) traffic, which makes it ideal for public and private cloud configurations.

Part number information

Table 1 shows the part numbers and feature codes for the adapters.

Table 1. Ordering part numbers and feature codes

Part number  Feature code  Description
00D9690      A3PM          Mellanox ConnectX-3 10GbE Adapter for IBM System x
00D9550      A3PN          Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x
00FP650      A5RK          Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter for IBM System x

The part numbers include the following:
  • One adapter with a bracket attached
  • Additional bracket included in the box
  • Quick installation guide
  • Documentation CD
  • Warranty Flyer
  • IBM Important Notices Flyer

Figure 2 shows the Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter. (The shipping adapter includes a heatsink over the ASIC; the heatsink is not shown in the figure.)


Figure 2. Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x with 3U bracket (required ASIC heatsink not shown)


Supported cables and transceivers

The 10GbE Adapter (00D9690) and ML2 Adapter (00FP650) support the direct-attach copper (DAC) twin-ax cables, transceivers, and optical cables that are listed in the following table.

Table 2. Supported transceivers and DAC cables - 10GbE Adapter and ML2 Adapter
Part number  Feature code  Description
Direct Attach Copper (DAC) cables
00D6288      A3RG          0.5 m IBM Passive DAC SFP+ Cable
90Y9427      A1PH          1 m IBM Passive DAC SFP+ Cable
90Y9430      A1PJ          3 m IBM Passive DAC SFP+ Cable
90Y9433      A1PK          5 m IBM Passive DAC SFP+ Cable
00D6151      A3RH          7 m IBM Passive DAC SFP+ Cable
95Y0323      A25A          1 m IBM Active DAC SFP+ Cable
95Y0326      A25B          3 m IBM Active DAC SFP+ Cable
95Y0329      A25C          5 m IBM Active DAC SFP+ Cable
SFP+ Transceivers
46C3447      5053          IBM SFP+ SR Transceiver
45W2411                    SFP+ Transceiver 10 Gbps SR (IB-000180)
49Y4216      0069          Brocade 10Gb SFP+ SR Optical Transceiver
49Y4218      0064          QLogic 10Gb SFP+ SR Optical Transceiver
Optical Cables
88Y6851      A1DS          1 m LC-LC Fiber Cable (networking) - Optical
88Y6854      A1DT          5 m LC-LC Fiber Cable (networking) - Optical
88Y6857      A1DU          25 m LC-LC Fiber Cable (networking) - Optical

The FDR VPI IB/E Adapter (00D9550) supports the direct-attach copper (DAC) twin-ax cables, transceivers, and optical cables that are listed in the following table.

Table 3. Supported transceivers and DAC cables - Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter
Part number        Feature code  Description
Direct attach copper (DAC) cables - InfiniBand
44T1364            ARZA          0.5 m Mellanox QSFP Passive DAC Cable for IBM System x
00KF002            ARZB          0.75 m Mellanox QSFP Passive DAC Cable for IBM System x
00KF003            ARZC          1 m Mellanox QSFP Passive DAC Cable for IBM System x
00KF004            ARZD          1.25 m Mellanox QSFP Passive DAC Cable for IBM System x
00KF005            ARZE          1.5 m Mellanox QSFP Passive DAC Cable for IBM System x
00KF006            ARZF          3 m Mellanox QSFP Passive DAC Cable for IBM System x
Optical cables - InfiniBand
00KF007            ARYC          3 m Mellanox Active IB FDR Optical Fiber Cable for IBM System x
00KF008            ARYD          5 m Mellanox Active IB FDR Optical Fiber Cable for IBM System x
00KF009            ARYE          10 m Mellanox Active IB FDR Optical Fiber Cable for IBM System x
00KF010            ARYF          15 m Mellanox Active IB FDR Optical Fiber Cable for IBM System x
00KF011            ARYG          20 m Mellanox Active IB FDR Optical Fiber Cable for IBM System x
00KF012            ARYH          30 m Mellanox Active IB FDR Optical Fiber Cable for IBM System x
40Gb Ethernet (QSFP) to 10Gb Ethernet (SFP+) conversion
00KF013            ARZG          3 m Mellanox QSFP Passive DAC Hybrid Cable for IBM System x
00D9676            ARZH          Mellanox QSFP to SFP+ adapter for IBM System x
40Gb Ethernet (QSFP) copper - 40GbE copper uses QSFP+ to QSFP+ cables directly
74Y6074 / 49Y7890  A1DP          1 m IBM QSFP+ to QSFP+ Cable
74Y6075 / 49Y7891  A1DQ          3 m IBM QSFP+ to QSFP+ Cable
00D5810            A2X8          5 m IBM QSFP-to-QSFP cable
00D5813            A2X9          7 m IBM QSFP-to-QSFP cable
40Gb Ethernet (QSFP) optical - 40GbE optical uses a QSFP+ transceiver with MTP optical cables
49Y7884            A1DR          IBM QSFP+ 40GBASE-SR4 Transceiver
90Y3519            A1MM          10 m IBM QSFP+ MTP Optical cable
90Y3521            A1MN          30 m IBM QSFP+ MTP Optical cable

The following figure shows the Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter.

Figure 3. Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter

Features

The Mellanox ConnectX-3 10GbE Adapter has the following features:

  • Two 10 Gigabit Ethernet ports
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • SR-IOV support: 16 virtual functions supported by KVM and Hyper-V (OS dependent), up to a maximum of 127 virtual functions supported by the adapter
  • Enables low-latency RDMA over Ethernet (supported with both non-virtualized and SR-IOV-enabled virtualized servers), with latency as low as 1 μs
  • TCP/UDP/IP stateless offload in hardware
  • Traffic steering across multiple cores
  • Intelligent interrupt coalescence
  • Industry-leading throughput and latency performance
  • Software compatible with standard TCP/UDP/IP stacks
  • Microsoft VMQ / VMware NetQueue support
  • Legacy and UEFI PXE network boot support

The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter has the following features:
  • Two QSFP ports supporting FDR-14 InfiniBand or 40 Gb Ethernet
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • Support for InfiniBand FDR speeds of up to 56 Gbps (auto-negotiation to FDR-10, QDR, DDR, and SDR)
  • Support for Virtual Protocol Interconnect (VPI), which enables one adapter to serve both InfiniBand and 10/40 Gb Ethernet (see the port-query sketch after this feature list). Supports three configurations:
    • 2 ports InfiniBand
    • 2 ports Ethernet
    • 1 port InfiniBand and 1 port Ethernet
  • SR-IOV support: 16 virtual functions supported by KVM and Hyper-V (OS dependent), up to a maximum of 127 virtual functions supported by the adapter
  • Enables low-latency RDMA over 40Gb Ethernet (supported with both non-virtualized and SR-IOV-enabled virtualized servers), with latency as low as 1 μs
  • Microsoft VMQ / VMware NetQueue support
  • Sub 1 µs InfiniBand MPI ping latency
  • Support for QSFP to SFP+ conversion for 10 GbE connectivity
  • Traffic steering across multiple cores
  • Legacy and UEFI PXE network boot support (Ethernet mode only)
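Because each VPI port can be configured for either personality, software can discover at run time whether a given port is running InfiniBand or Ethernet. The following minimal C sketch (an illustrative example, not part of the adapter documentation) assumes the standard OpenFabrics libibverbs library is installed, opens the first RDMA device it finds, and reports the link layer of each port. Build it with a command such as gcc vpi_ports.c -libverbs.

    /* Report whether each port of the first RDMA device is running
     * InfiniBand or Ethernet (RoCE). Assumes libibverbs is installed. */
    #include <stdio.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "No RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) {
            fprintf(stderr, "Cannot open %s\n", ibv_get_device_name(devs[0]));
            ibv_free_device_list(devs);
            return 1;
        }

        struct ibv_device_attr dev_attr;
        ibv_query_device(ctx, &dev_attr);

        for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
            struct ibv_port_attr port_attr;
            if (ibv_query_port(ctx, port, &port_attr))
                continue;
            printf("%s port %u: %s\n",
                   ibv_get_device_name(devs[0]), port,
                   port_attr.link_layer == IBV_LINK_LAYER_ETHERNET ?
                       "Ethernet" : "InfiniBand");
        }

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }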

The Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter has the same features as the ConnectX-3 40GbE / FDR IB VPI Adapter with these additions:
  • Mezzanine LOM Generation 2 (ML2) form factor
  • Offers NVGRE hardware offloads
  • Offers VXLAN hardware offloads


Performance

Based on Mellanox ConnectX-3 technology, these adapters provide a high level of throughput for all network environments by removing the I/O bottlenecks in mainstream servers that limit application performance. With the FDR VPI IB/E Adapter, servers can achieve up to 56 Gbps of transmit and receive bandwidth. Hardware-based InfiniBand transport and IP over InfiniBand (IPoIB) stateless offload engines handle the segmentation, reassembly, and checksum calculations that otherwise burden the host processor.

RDMA over InfiniBand and RDMA over Ethernet further accelerate application run time while reducing CPU utilization. RDMA supports the very high-volume, transaction-intensive applications that are typical of HPC and financial market firms, as well as other industries where speed of data delivery is paramount. With the ConnectX-3-based adapters, highly compute-intensive tasks running on hundreds or thousands of multiprocessor nodes, such as climate research, molecular modeling, and physical simulations, can share data and synchronize faster, resulting in shorter run times.

In data mining or web crawl applications, RDMA provides the needed boost in performance to enable faster search by solving the network latency bottleneck that is associated with I/O cards and the corresponding transport technology in the cloud. Various other applications that benefit from RDMA with ConnectX-3 include Web 2.0 (Content Delivery Network), business intelligence, database transactions, and various cloud computing applications. Mellanox ConnectX-3's low power consumption provides clients with high bandwidth and low latency at the lowest cost of ownership.

TCP/UDP/IP acceleration

Applications that use TCP/UDP/IP transport can achieve industry-leading data throughput. The hardware-based stateless offload engines in ConnectX-3 reduce the CPU impact of IP packet transport, allowing more processor cycles to work on the application.
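On Linux, these stateless offloads are exposed through the standard ethtool interface. The following C sketch (an illustrative example; the interface name eth0 is an assumed placeholder) uses the legacy ethtool ioctl to query whether TCP segmentation offload and receive checksum offload are currently enabled on a port.

    /* Query TCP segmentation offload (TSO) and RX checksum offload state
     * through the legacy ethtool ioctl. Replace "eth0" with the name of
     * the adapter port to be checked. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    static int ethtool_get(int fd, const char *ifname, __u32 cmd, __u32 *value)
    {
        struct ethtool_value ev = { .cmd = cmd };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ev;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return -1;
        *value = ev.data;
        return 0;
    }

    int main(void)
    {
        const char *ifname = "eth0";   /* assumed port name */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        __u32 tso = 0, rxcsum = 0;

        if (ethtool_get(fd, ifname, ETHTOOL_GTSO, &tso) == 0)
            printf("%s TSO: %s\n", ifname, tso ? "on" : "off");
        if (ethtool_get(fd, ifname, ETHTOOL_GRXCSUM, &rxcsum) == 0)
            printf("%s RX checksum offload: %s\n", ifname, rxcsum ? "on" : "off");

        close(fd);
        return 0;
    }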

NVGRE and VXLAN hardware offloads

The Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter offers NVGRE and VXLAN hardware offload engines that provide additional performance benefits, especially for public or private cloud implementations and virtualized environments. These offloads enable overlay networks to deliver the mobility, scalability, and serviceability that are required in today's and tomorrow's data centers. They also dramatically lower CPU consumption, thereby reducing cloud application cost, facilitating the highest available throughput, and lowering power consumption.
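For reference, the following C sketch builds the 8-byte VXLAN header defined in RFC 7348 (flags byte 0x08 and a 24-bit VXLAN network identifier, carried over UDP, destination port 4789 by default). This encapsulation is what the ConnectX-3 Pro offload engine parses so that checksum, segmentation, and receive-side steering for the inner frame can stay in hardware; the VNI value in the sketch is an arbitrary example.

    /* Build the 8-byte VXLAN header (RFC 7348) that the offload engine
     * parses. The VNI used here is an arbitrary example value. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl() */

    struct vxlan_hdr {
        uint8_t  flags;         /* 0x08 = "VNI present" bit set */
        uint8_t  reserved1[3];
        uint32_t vni_reserved;  /* upper 24 bits: VNI, lower 8 bits: reserved */
    };

    int main(void)
    {
        uint32_t vni = 5001;    /* example virtual network identifier */
        struct vxlan_hdr hdr;

        memset(&hdr, 0, sizeof(hdr));
        hdr.flags = 0x08;
        hdr.vni_reserved = htonl(vni << 8);  /* VNI occupies bits 31..8 */

        /* The header is prepended to the inner Ethernet frame and carried
         * inside an outer UDP/IP packet. */
        printf("VXLAN header: %zu bytes, VNI %u\n", sizeof(hdr), vni);
        return 0;
    }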

Software support

All Mellanox adapter cards are supported by a full suite of drivers for Microsoft Windows, Linux distributions, and VMware. ConnectX-3 adapters support OpenFabrics-based RDMA protocols and software. Stateless offload is fully interoperable with standard TCP/UDP/IP stacks. ConnectX-3 adapters are compatible with configuration and management tools from OEMs and operating system vendors.
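As a minimal illustration of the OpenFabrics verbs interface that these adapters support, the following C sketch opens the first RDMA device, allocates a protection domain, and registers a buffer for RDMA access. It is a simplified example with most error handling omitted; build it with a command such as gcc verbs_setup.c -libverbs.

    /* Minimal OpenFabrics verbs setup: open the first RDMA device,
     * allocate a protection domain, and register a 4 KB buffer so the
     * adapter can perform RDMA reads and writes to it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "No RDMA-capable adapter found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
        if (!ctx || !pd) {
            fprintf(stderr, "Cannot open device or allocate PD\n");
            return 1;
        }

        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "Memory registration failed\n");
            return 1;
        }

        /* The returned keys identify this memory region in work requests. */
        printf("Registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }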

Specifications

InfiniBand specifications (ConnectX-3 FDR VPI IB/E Adapter and ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter):

  • Supports InfiniBand FDR-14, FDR-10, QDR, DDR, and SDR
  • IBTA Specification 1.2.1 compliant
  • RDMA, Send/Receive semantics
  • Hardware-based congestion control
  • 16 million I/O channels
  • 256 to 4 KB MTU, 1 GB messages
  • Nine virtual lanes: Eight data and one management
  • NVGRE hardware offloads (ConnectX-3 Pro only)
  • VXLAN hardware offloads (ConnectX-3 Pro only)

Enhanced InfiniBand specifications:
  • Hardware-based reliable transport
  • Hardware-based reliable multicast
  • Extended Reliable Connected transport
  • Enhanced Atomic operations
  • Fine-grained end-to-end quality of service (QoS)

Ethernet specifications:
  • IEEE 802.3ae 10 GbE
  • IEEE 802.3ba 40 GbE (ConnectX-3 FDR VPI IB/E Adapter and ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter)
  • IEEE 802.3ad Link Aggregation
  • IEEE 802.3az Energy Efficient Ethernet
  • IEEE 802.1Q, 802.1p VLAN tags and priority
  • IEEE 802.1Qbg
  • IEEE P802.1Qaz D0.2 Enhanced Transmission Selection (ETS)
  • IEEE P802.1Qbb D1.0 Priority-based Flow Control
  • IEEE 1588v2 Precision Clock Synchronization
  • Multicast
  • Jumbo frame support (9600B)
  • 128 MAC/VLAN addresses per port

Hardware-based I/O virtualization:
  • Address translation and protection
  • Multiple queues per virtual machine
  • VMware NetQueue support
  • 16 virtual function SR-IOV supported with Linux KVM
  • VXLAN and NVGRE (ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter)

SR-IOV features (a configuration sketch follows this list):
  • Address translation and protection
  • Dedicated adapter resources
  • Multiple queues per virtual machine
  • Enhanced QoS for vNICs
  • VMware NetQueue support
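
The following C sketch illustrates one common way to request SR-IOV virtual functions on Linux: writing the desired VF count to the generic sriov_numvfs sysfs attribute of the adapter's PCI function. The PCI address shown is a placeholder, and the exact provisioning steps for these adapters (including any firmware or mlx4 driver settings) should be taken from the Mellanox and IBM documentation.

    /* Request SR-IOV virtual functions through the generic Linux sysfs
     * interface. The PCI address below is a placeholder; the supported VF
     * count depends on the adapter firmware and the operating system. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/bus/pci/devices/0000:04:00.0/sriov_numvfs"; /* placeholder */
        int requested_vfs = 8;

        FILE *f = fopen(path, "w");
        if (!f) {
            perror("open sriov_numvfs");
            return 1;
        }
        fprintf(f, "%d\n", requested_vfs);
        if (fclose(f) != 0) {
            perror("write sriov_numvfs");
            return 1;
        }
        printf("Requested %d virtual functions\n", requested_vfs);
        return 0;
    }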

Additional CPU offloads:
  • TCP/UDP/IP stateless offload
  • Intelligent interrupt coalescence
  • Compliant with Microsoft RSS and NetDMA

Management and tools:

InfiniBand:
  • Interoperable with OpenSM and other third-party subnet managers
  • Firmware and debug tools (MFT and IBDIAG)
Ethernet:
  • MIB, MIB-II, MIB-II Extensions, RMON, and RMON 2
  • Configuration and diagnostic tools

Protocol support:
  • Open MPI, OSU MVAPICH, Intel MPI, MS MPI, and Platform MPI
  • TCP/UDP, EoIB and IPoIB
  • uDAPL

Physical specifications

The adapters have the following dimensions:

  • Height: 168 mm (6.60 in)
  • Width: 69 mm (2.71 in)
  • Depth: 17 mm (0.69 in)
  • Weight: 208 g (0.46 lb)

Approximate shipping dimensions:
  • Height: 189 mm (7.51 in)
  • Width: 90 mm (3.54 in)
  • Depth: 38 mm (1.50 in)
  • Weight: 450 g (0.99 lb)

Regulatory approvals

  • EN55022
  • EN55024
  • EN60950-1
  • EN 61000-3-2
  • EN 61000-3-3
  • IEC 60950-1
  • FCC Part 15 Class A
  • UL 60950-1
  • CSA C22.2 60950-1-07
  • VCCI
  • AS/NZS 3548 / C-Tick
  • RRL for MIC (KCC)
  • BSMI (EMC)
  • ICES-003:2004 Issue 4

Operating environment

The adapters are supported in the following environment:

Operating temperature:

  • 0 - 55° C (32 - 131° F) at 0 - 914 m (0 - 3,000 ft)
  • 10 - 32° C (50 - 90° F) at 914 to 2133 m (3,000 - 7,000 ft)

Relative humidity: 20% - 80% (noncondensing)
Maximum altitude: 2,133 m (7,000 ft)
Air flow: 200 LFM at 55° C

Power consumption:
  • Typical: 8.8 W (both ports active)
  • Maximum: 9.4 W with passive cables; 13.4 W with active optical modules

Warranty

One year limited warranty. When installed in an IBM System x server, these cards assume your system’s base warranty and any IBM ServicePac® upgrades.

Supported servers

The adapters are supported in the System x servers that are listed in the following table.

Table 4. Server compatibility

The table columns are, from left to right:
  • M4 and X5 systems (v1 processors): x3100 M4 (2582), x3250 M4 (2583), x3300 M4 (7382), x3500 M4 (7383, E5-2600), x3530 M4 (7160), x3550 M4 (7914, E5-2600), x3630 M4 (7158), x3650 M4 (7915, E5-2600), x3690 X5 (7147), x3750 M4 (8722), x3850 X5 (7143), dx360 M4 (7912, E5-2600)
  • M4 and X6 systems (v2 processors): x3500 M4 (7383, E5-2600 v2), x3630 M4 (7158, E5-2400 v2), x3550 M4 (7914, E5-2600 v2), x3650 M4 (7915, E5-2600 v2), x3650 M4 HD (5460), x3850 X6/x3950 X6 (3837), dx360 M4 (7912, E5-2600 v2), nx360 M4 (5455)
  • M5 systems (v3 processors): x3100 M5 (5457), x3250 M5 (5458), x3550 M5 (5463), x3650 M5 (5462), nx360 M5 (5465)

Support for each adapter, reading across the table columns from left to right (Y = supported, N = not supported):
  • 00D9690, Mellanox ConnectX-3 10GbE Adapter for IBM System x:
    N N Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y
  • 00D9550, Mellanox ConnectX-3 FDR VPI IB/E Adapter for IBM System x:
    N N Y Y Y Y Y Y Y Y Y N Y Y Y Y Y Y Y Y Y Y Y N N Y Y Y
  • 00FP650, Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter for IBM System x:
    N N N N N N N N N N N N N N N N N N N Y Y N N N N N N Y
Memory requirements: Ensure that your server has sufficient memory available for the adapters. The first adapter requires 8 GB of RAM in addition to the memory that is allocated to the operating system, applications, and virtual machines; each additional adapter requires a further 4 GB of RAM. For example, a server with three of these adapters needs 8 + 4 + 4 = 16 GB of RAM beyond what the operating system, applications, and virtual machines use.

Supported operating systems

The adapters support the following operating systems. Check IBM Fix Central for currently supported OS kernels:
http://ibm.com/support/fixcentral

For the Mellanox ConnectX-3 10GbE Adapter and Mellanox ConnectX-3 FDR VPI IB/E Adapter:

  • Microsoft Windows Server 2008 R2
  • Microsoft Windows Server 2012 R2
  • Red Hat Enterprise Linux 5 Server x64 Edition
  • Red Hat Enterprise Linux 6 Server x64 Edition
  • SUSE LINUX Enterprise Server 10 for AMD64/EM64T
  • SUSE LINUX Enterprise Server 11 for AMD64/EM64T
  • VMware vSphere 5.0 (ESXi)
  • VMware vSphere 5.1 (ESXi)
  • VMware vSphere 5.5 (ESXi)

For the Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter:
  • Microsoft Windows Server 2012 R2
  • SUSE LINUX Enterprise Server 11 SP3 for AMD64/EM64T
  • Red Hat Enterprise Linux 6 Server x64 Edition, Update 5
  • Red Hat Enterprise Linux 7
  • SUSE Linux Enterprise Server (SLES) 12
  • VMware vSphere 5.5 (ESXi) U1
  • VMware vSphere 5.1 (ESXi) U2

Note:
  • VXLAN is initially supported only with Red Hat Enterprise Linux 7
  • NVGRE is initially supported only with Windows Server 2012 R2

Note on VMware support: With VMware, these adapters are supported only in Ethernet mode. InfiniBand is not supported.

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
21 February 2013

Last Update
08 September 2014




Author(s)

IBM Form Number
TIPS0897