Cisco Nexus 4001I Switch Module for IBM BladeCenter

IBM Redbooks Product Guide


Abstract

The Cisco Nexus 4001I Switch Module is a blade switch solution for the BladeCenter H and HT chassis providing the server I/O solution required for high-performance, scale-out, virtualized, and non-virtualized x86 computing architectures. It is a line rate, extremely low-latency, non-blocking, Layer 2, 10 Gigabit Ethernet blade switch that is fully compliant with Fibre Channel over Ethernet (FCoE) and IEEE Data Center Bridging standards.


The Cisco Nexus 4001I enables a standards-based, high-performance Unified Fabric running over 10 Gigabit Ethernet in the blade server environment. This Unified Fabric enables consolidation of LAN traffic, storage traffic (IP-based such as iSCSI, NAS, and Fibre Channel SAN), and high-performance computing (HPC) traffic over a single 10 Gigabit Ethernet server network.

This offering works with BladeCenter Open Fabric Manager, providing all the benefits of I/O virtualization at 10 Gbps speed. Figure 1 shows the switch module.

Cisco Nexus 4001I Switch Module for IBM BladeCenter
Figure 1. Cisco Nexus 4001I Switch Module for IBM BladeCenter

Did you know?

Clients considering Fibre Channel over Ethernet as a fabric consolidation solution can implement a 10 Gigabit Ethernet infrastructure as the basis of the solution. The solution uses FCoE between the converged network adapter (CNA) in each server and a top-of-rack (TOR) switch such as the Cisco Nexus 5000 Series, with the Cisco Nexus 4001I blade switch at the access layer between them.

With this solution, clients gain most of the cost savings from the use of fewer adapters, cables, and switch ports. The FCoE connection between the CNA and the 4001I and 5000 Series switches carries both Fibre Channel and Ethernet traffic on a single link. The Cisco Nexus 5000 Series switch then separates the traffic, forwarding LAN traffic upstream to the Cisco Nexus 7000 Series switch and SAN traffic upstream to the Cisco MDS 9000 Family switch.


Part number information

Table 1 shows the part numbers for ordering these modules and additional options for them.


Table 1. IBM part numbers and feature codes for ordering

Description                                      IBM part number   IBM feature code   Cisco part number
Cisco Nexus 4001I Switch Module                  46M6071           0072               N4K-4001I-XPX
Software Upgrade License for Cisco Nexus 4001I   49Y9983           1744               N4K-4001I-SSK9

The module part numbers include the following items:
  • One Cisco Nexus 4001I Switch Module
  • Cisco Console Cable RJ45-to-DB9
  • One filler panel
  • Important Notices document
  • Documentation CD-ROM

Software Upgrade License for Cisco Nexus 4001I

The Cisco Nexus 4001I Switch Module is designed to support both 10 Gb Ethernet and Fibre Channel over Ethernet. Software Upgrade License for Cisco Nexus 4001I, part number 49Y9983, enables the switch to work in FCoE mode. When connected to a converged adapter in the server, this switch can route CEE packets to an upstream FCoE switch, which can then route the packets to the LAN or SAN.
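As a sketch of what enabling FCoE mode involves once the license is installed, the following Python snippet builds an NX-OS-style configuration and shows how it could be pushed over SSH. The CLI lines follow the syntax documented for other Cisco Nexus switches, and the host, credentials, and VLAN/VSAN numbers are placeholders; verify the exact commands against the Nexus 4001I configuration guide.

```python
def build_fcoe_config(fcoe_vlan, vsan):
    """Build NX-OS-style CLI lines that enable FCoE and map a VLAN to a VSAN.
    Syntax is modeled on other Nexus switches; confirm against the 4001I docs."""
    return [
        "feature fcoe",          # enable the FCoE feature set
        f"vlan {fcoe_vlan}",
        f"  fcoe vsan {vsan}",   # carry this VSAN's traffic on the VLAN
    ]

def push_config(host, username, password, config_lines):
    """Apply the configuration over SSH (requires: pip install netmiko)."""
    from netmiko import ConnectHandler
    conn = ConnectHandler(device_type="cisco_nxos", host=host,
                          username=username, password=password)
    output = conn.send_config_set(config_lines)
    conn.disconnect()
    return output

# Hypothetical VLAN 100 mapped to VSAN 100; push_config is not invoked here
# because it needs network access to a real switch.
config = build_fcoe_config(fcoe_vlan=100, vsan=100)
```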

SFP+ Transceivers and Copper Cables

The Cisco Nexus 4001I Switch Module does not ship with either SFP+ Transceivers or SFP+ Copper Cables. These can be ordered directly from Cisco Systems or authorized Cisco Systems resellers as listed in Table 2. Certain transceivers can also be ordered directly from IBM.

Table 2. SFP+ Transceivers and Copper Cables
Description                                                IBM part number   Cisco part number
Cisco 10GBASE-SR SFP+ Transceiver                          88Y6054           SFP-10G-SR(=)
Cisco 10GBASE-LR SFP+ Transceiver                          Not available     SFP-10G-LR(=)
Cisco 10GBASE-CU SFP+ Cable 1 Meter                        Not available     SFP-H10GB-CU1M(=)
Cisco 10GBASE-CU SFP+ Cable 3 Meter                        Not available     SFP-H10GB-CU3M(=)
Cisco 10GBASE-CU SFP+ Cable 5 Meter                        Not available     SFP-H10GB-CU5M(=)
Cisco 1000BASE-T SFP Transceiver                           88Y6058           GLC-T(=)
Cisco 1000BASE-SX SFP Transceiver (GE SFP, LC connector)   88Y6062           GLC-SX-MM(=)
Cisco GE SFP, LC connector LX/LH transceiver               Not available     GLC-LH-SM(=)

The Cisco 10GBASE-SR SFP+ Transceiver (88Y6054) supports a link length of 26 meters on standard Fiber Distributed Data Interface (FDDI)-grade multimode fiber (MMF).

The Cisco 1000BASE-T SFP Transceiver (88Y6058) operates on standard Category 5 unshielded twisted pair copper cabling of up to 100 meters (328 ft) link length. Cisco 1000BASE-T SFP modules support 10/100/1000 autonegotiation and Auto MDI/MDIX.

The Cisco 1000BASE-SX SFP Transceiver (88Y6062) is compatible with the IEEE 802.3z 1000BASE-SX standard and operates on 50 µm multimode fiber links up to 550 m and on 62.5 µm Fiber Distributed Data Interface (FDDI)-grade multimode fiber up to 220 m. It can support up to 1 km over laser-optimized 50 µm multimode fiber cable.


Benefits

The benefits of the Cisco Nexus 4001I Switch Module include the following:
  • Lower total cost of ownership (TCO): Deploying a Unified Fabric with the Nexus 4001I at the blade server access layer enables a significant reduction in the number of switches, network adapters (LAN and SAN), ports, optic modules, and cables. This consolidation of server access network elements significantly reduces the overall capital and operating costs of the data center network by reducing the number of network elements to purchase, manage, power, and cool.
  • High performance: The Nexus 4001I is a line rate, feature-rich, extremely low-latency switch that enables server access migration from 1 GbE to 10 GbE to lossless 10 GbE, while also supporting the demanding latency requirements of HPC clusters and high-frequency trading applications.
  • Enhanced server virtualization: Using a Unified Fabric at the server access layer with the Nexus 4001I provides the uniform interfaces, simplified cabling, and consistent server access design required to take advantage of automated virtual machine mobility. Using the Nexus 4001I together with the Nexus 1000V delivers an operationally consistent and transparent server access design for virtual machine deployments, substantially reducing the overhead to configure, troubleshoot, and repair the server access link between the vNIC, virtual switch, and blade switch.
  • Increased resilience: The Nexus 4001I extends NX-OS to blade server access, providing a fault-tolerant network with a single modular operating system across the data center.


Features and specifications

The Cisco Nexus 4001I Switch Module includes the following features and functions:
  • Form-factor
    • Single-height high-speed switch module
  • External ports
    • Six 10 Gb SFP+ ports operating at wire speed. Also designed to support 1 Gb SFP if required, with the flexibility of mixing 1 Gb/10 Gb. Table 2 lists supported transceivers and cables.
    • One 10/100/1000 Mb copper RJ-45 used for out-of-band management.
    • An RS-232 RJ-45 connector for serial port that provides an additional means to configure the switch module. The console cable is supplied with the switch module.
  • Internal ports
    • Fourteen internal auto-negotiating ports: 1 Gb or 10 Gb to the server blades
    • One internal full-duplex 100 Mbps port connected to the management module
  • Scalability and performance
    • Autosensing 1 Gb/10 Gb internal and external Ethernet ports for bandwidth optimization
    • Non-blocking architecture with wire-speed forwarding of traffic and full line rate performance of 400 Gbps full duplex
    • Forwarding rate of 300 million packets per second (mpps)
    • Low, predictable, and consistent latency of 1.5 microseconds regardless of packet size, traffic pattern, or enabled features on 10 Gigabit Ethernet interface
    • Media access control (MAC) address learning: automatic update, supports up to 8,000 MAC address entries
    • EtherChannels and LACP (IEEE 802.3ad) link aggregation, up to 60 Gb of total uplink bandwidth per switch, up to seven trunk groups, and up to six ports per group
    • Support for jumbo frames (up to 9216 bytes)
    • Traffic suppression (unicast, multicast, and broadcast)
    • IGMP snooping to limit flooding of IP multicast traffic (IGMP V2 and V3)
    • Configurable traffic distribution schemes over EtherChannel links based on source/destination IP addresses, MAC addresses, or ports
    • Spanning Tree edge ports (formerly PortFast) for rapid STP convergence
  • Availability and redundancy
    • IEEE 802.1D-2004 Rapid and Multiple Spanning Tree Protocols (802.1w and 802.1s)
    • Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on blades
  • VLAN support
    • Up to 512 VLANs supported per switch; VLAN numbers ranging from 1 to 4000
    • 802.1Q VLAN tagging support on all ports
    • Private VLANs
  • Security
    • VLAN-based, MAC-based, and IP-based access control lists (ACLs)
    • Role-based access control
    • RADIUS and TACACS+ authentication
  • Quality of service (QoS)
    • Support for IEEE 802.1p CoS, IP ToS/DSCP, Protocol, IP Real Time Protocol, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
    • Traffic shaping and re-marking based on defined policies
    • Eight Weighted Round Robin (WRR) priority queues per port for processing qualified traffic
  • Fibre Channel over Ethernet
    • Support for T11-compliant FCoE on all 10-Gigabit Ethernet interfaces
    • FCoE Initialization Protocol (FIP) and Converged Enhanced Ethernet Data Center Bridging Exchange (CEE-DCBX) protocol support for T11-compliant Gen-2 CNAs
    • 802.1Q VLAN tagging for FCoE frames
    • Priority-based flow control (IEEE 802.1Qbb) simplifies management of multiple traffic flows over a single network link and creates lossless behavior for Ethernet by allowing class-of-service (CoS)-based flow control
    • Enhanced Transmission Selection (IEEE 802.1Qaz) enables consistent management of QoS at the network level by providing consistent scheduling of different traffic types (IP, storage, and so on)
    • Data Center Bridging Exchange (DCBX) Protocol (IEEE 802.1AB) simplifies network deployment and reduces configuration errors by providing autonegotiation of IEEE 802.1 DCB features between the network interface card (NIC) and the switch and between switches
  • Manageability
    • Command-line interface (CLI): You can configure switches using the CLI from an SSH V2 session, a Telnet session, or the console port. SSH provides a secure connection to the device.
    • XML Management Interface over SSH: You can configure switches using the XML management interface, which is a programming interface based on the NETCONF protocol that complements the CLI functionality. For more information, see the Cisco NX-OS XML Interfaces User Guide.
    • Cisco Data Center Manager support.
  • Monitoring
    • Switch LEDs for external port status and switch module status indication.
    • RMON.
    • Change tracking and remote logging with syslog feature.
    • Online diagnostics.
    • Cisco Fabric Services.
    • Session Manager.
  • Special functions
    • Serial over LAN (SOL)
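The aggregate figures in the scalability and performance list above follow directly from the port counts; a quick back-of-the-envelope sketch in Python (the 20-byte preamble-plus-interframe-gap framing overhead is standard Ethernet, not a figure from this guide):

```python
PORTS = 14 + 6          # 14 internal server-facing + 6 external 10 GbE ports
LINE_RATE_BPS = 10e9    # 10 Gbps per port

# Full-duplex aggregate bandwidth across all ports.
aggregate_gbps = PORTS * 10 * 2
print(aggregate_gbps)   # 400, matching the quoted 400 Gbps full duplex

# Worst-case packet rate: a minimum 64-byte frame occupies 84 bytes on the
# wire once the 8-byte preamble and 12-byte interframe gap are added.
frames_per_port = LINE_RATE_BPS / (84 * 8)
total_mpps = PORTS * frames_per_port / 1e6
print(round(total_mpps, 1))  # 297.6, consistent with the quoted 300 mpps
```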

The switch module supports the following IEEE standards:

● IEEE 802.1D: Spanning Tree Protocol
● IEEE 802.1p: CoS Prioritization
● IEEE 802.1Q: VLAN Tagging
● IEEE 802.1s: Multiple VLAN Instances of Spanning Tree Protocol
● IEEE 802.1w: Rapid Reconfiguration of Spanning Tree Protocol
● IEEE 802.3: Ethernet
● IEEE 802.3ad: Link Aggregation Control Protocol (LACP)
● IEEE 802.3ae: 10 Gigabit Ethernet
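The configurable EtherChannel traffic-distribution schemes listed earlier work by hashing selected header fields to choose a member link, which keeps every packet of a given flow on the same link and so preserves frame ordering. A minimal Python sketch of the idea (the CRC32 hash is purely illustrative; the switch's hardware hash function is not specified in this guide):

```python
import zlib

def select_member_link(src_ip, dst_ip, n_links):
    """Choose an EtherChannel member link by hashing source/destination IPs.
    Illustrative only: real hardware hashes the configured fields
    (IP addresses, MAC addresses, or ports) with its own function."""
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % n_links

# Every packet of the same flow maps to the same of the six uplinks:
link_a = select_member_link("10.1.1.10", "10.1.2.20", 6)
link_b = select_member_link("10.1.1.10", "10.1.2.20", 6)
print(link_a == link_b)  # True: flow ordering is preserved
```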


Supported BladeCenter chassis and expansion cards

The Cisco Nexus 4001I Switch Module is supported in the IBM BladeCenter chassis listed in Table 3.

Table 3. IBM BladeCenter chassis that support the Cisco Nexus 4001I Switch Module

I/O module                        Part number   BC S   BC E   BC H   BC T   BC HT   MSIM   MSIM-HT
Cisco Nexus 4001I Switch Module   46M6071       N      N      Y      N      Y       N      N

The Cisco Nexus 4001I Switch Module supports the expansion cards listed in Table 4. Table 4 also lists the chassis bays in which the switch module must be installed when used with each expansion card.

Table 4. Cisco Nexus 4001I Switch Module and BladeCenter chassis I/O bays support
(Bays 1 to 4 are standard bays, bays 5 and 6 are bridge bays, and bays 7 to 10 are high-speed bays.)

Description                                                  Part number   1   2   3   4   5   6   7   8   9   10
Gigabit Ethernet integrated in the server                    None          N   N   N   N   N   N   N   N   N   N
Ethernet Expansion Card (CFFv)                               39Y9310       N   N   N   N   N   N   N   N   N   N
Ethernet Expansion Card (CIOv)                               44W4475       N   N   N   N   N   N   N   N   N   N
QLogic Ethernet and 4Gb FC Card (CFFh)                       39Y9306       N   N   N   N   N   N   N   N   N   N
2/4 Port Ethernet Expansion Card (CFFh)                      44W4479       N   N   N   N   N   N   Y   Y   Y   Y
QLogic Ethernet and 8Gb FC Card (CFFh)                       44X1940       N   N   N   N   N   N   N   N   N   N
NetXen 10Gb Ethernet Expansion Card (CFFh)                   39Y9271       N   N   N   N   N   N   Y   N   Y   N
Broadcom 2-port 10Gb Ethernet Exp. Card (CFFh)               44W4466       N   N   N   N   N   N   Y   N   Y   N
Broadcom 4-port 10Gb Ethernet Exp. Card (CFFh)               44W4465       N   N   N   N   N   N   Y   Y   Y   Y
Broadcom 10 Gb Gen 2 2-port Ethernet Expansion Card (CFFh)   46M6168       N   N   N   N   N   N   Y   N   Y   N
Broadcom 10 Gb Gen 2 4-port Ethernet Expansion Card (CFFh)   46M6164       N   N   N   N   N   N   Y   Y   Y   Y
QLogic 2-port 10Gb Converged Network Adapter (CFFh)          42C1830       N   N   N   N   N   N   Y   N   Y   N
Emulex Virtual Fabric Adapter (CFFh)                         49Y4235       N   N   N   N   N   N   Y*  N   Y*  N

* The Cisco Nexus 4001I Switch Module supports the Emulex Virtual Fabric Adapter (CFFh) only in the physical NIC (pNIC) mode of the adapter.

The BladeCenter chassis have the following bays:
  • BladeCenter S, E, and T have four standard I/O bays (1, 2, 3, and 4).
  • BladeCenter H has four standard I/O bays (1, 2, 3, and 4), two bridge bays (5 and 6), and four high-speed bays (7, 8, 9, and 10).
  • BladeCenter HT has four standard I/O bays (1, 2, 3, and 4) and four high-speed bays (7, 8, 9, and 10).

The Cisco Nexus 4001I Switch Module fits in one of the high-speed I/O bays (bay 7, 8, 9, or 10).


Popular configurations

The Cisco Nexus 4001I Switch Module can be used in various configurations.

Fibre Channel over Ethernet configuration

Figure 2 shows the use of Cisco Nexus 4001I Switch Modules to route two Ethernet ports from the QLogic 2-port 10Gb Converged Network Adapter (CFFh) installed into each server. Two Cisco Nexus 4001I Switch Modules are installed in bay 7 and bay 9 of the BladeCenter H chassis. All connections between the controller, card, and the switch modules are internal to the chassis. No cabling is needed.

The Cisco Nexus 4001I Switch Modules are connected to the Cisco Nexus 5000 TOR switches that can have native FC interfaces for storage attachments (not shown).

Figure 2. FCoE solution using two Cisco Nexus 4001I Switch Modules

Table 5 lists the components used in this configuration.

Table 5. Components used when connecting QLogic 2-port 10Gb Converged Network Adapter (CFFh) to two Cisco Nexus 4001I Switch Modules
Diagram reference   Part number / machine type   Description                                                   Quantity
1                   Varies                       IBM BladeCenter HS22 or other supported server                1 to 14
2                   42C1830                      QLogic 2-port 10Gb Converged Network Adapter (CFFh)           1 per server
3                   8852 or 8740/8750            BladeCenter H or BladeCenter HT                               1
4                   46M6071                      Cisco Nexus 4001I Switch Module                               2
5                   49Y9983                      Software Upgrade License for Cisco Nexus 4001I                2
-                   Varies                       External cables to connect to Cisco Nexus 5000 (not shown)   Up to 12*
*The Cisco Nexus 4001I Switch Module has six external 10 Gb ports. To communicate outside of the chassis, you must have either one SFP+ transceiver or SFP+ direct-attached cable (DAC) connected. You have the flexibility to expand your bandwidth as you see fit using from one to six connections per switch.
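The footnote above leaves the uplink count to the administrator; a small sketch shows how that choice affects oversubscription. The 14-server, 10 Gb figures come from the fully populated chassis case in Table 5; the oversubscription framing itself is our illustration, not a figure from this guide.

```python
def oversubscription_ratio(servers, server_gbps, uplinks, uplink_gbps=10):
    """Ratio of server-facing bandwidth to uplink bandwidth on one switch."""
    return (servers * server_gbps) / (uplinks * uplink_gbps)

# Fully populated chassis: 14 blades at 10 Gb each, varying SFP+ uplink count.
for uplinks in (1, 2, 4, 6):
    ratio = oversubscription_ratio(14, 10, uplinks)
    print(f"{uplinks} uplink(s): {ratio:.2f}:1")
```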

4-port 10 Gb Ethernet LAN-only configuration

Figure 3 shows the use of Cisco Nexus 4001I Switch Modules to route four Ethernet ports from the Broadcom 4-port 10Gb Ethernet Expansion Card (CFFh) installed into each server. Four Cisco Nexus 4001I Switch Modules are installed in bay 7, bay 8, bay 9, and bay 10 of the BladeCenter H chassis. All connections between the controller, card, and switch modules are internal to the chassis. No cabling is needed.

Figure 3. A 40 Gb solution using four Cisco Nexus 4001I Switch Modules

Table 6 lists the components used in this configuration.

Table 6. Components used when connecting Broadcom 4-port 10Gb Ethernet Expansion Card (CFFh) to four Cisco Nexus 4001I Switch Modules
Diagram reference   Part number / machine type   Description                                                              Quantity
1                   Varies                       IBM BladeCenter HS22 or other supported server                           1 to 14
2                   44W4465                      Broadcom 4-port 10Gb Ethernet Expansion Card (CFFh)                      1 per server
3                   8852 or 8740/8750            BladeCenter H or BladeCenter HT                                          1
4                   46M6071                      Cisco Nexus 4001I Switch Module                                          4
5                   Varies                       External cables to connect to external LAN infrastructure (not shown)   Up to 24*
*The Cisco Nexus 4001I Switch Module has six external 10 Gb ports. To communicate outside of the chassis, you must have either one SFP+ transceiver or SFP+ direct-attached cable (DAC) connected. You have the flexibility to expand your bandwidth as needed, using from one to six connections per switch.


Connectors and LEDs

Figure 4 shows the front panel of the Cisco Nexus 4001I Switch Module.

Front panel of the Cisco Nexus 4001I Switch Module
Figure 4. Front panel of the Cisco Nexus 4001I Switch Module

The front panel contains the components listed in Table 7.

Table 7. Callouts in Figure 4
Callout            Description
1, 2, 3, 4, 5, 6   10 Gb Ethernet Small Form Factor Pluggable (SFP+) ports
7, 10              Release latches
8                  Out-of-band management RJ-45 port (labeled Management); supports 10/100/1000 Mbps speeds
9                  Serial console port with RJ-45 connector for a management console (labeled Console)


Network cabling requirements

The network cables required for the switch module are:
  • 10GBASE-SR
    • 850 nm communication using multimode fiber cable (50 µm or 62.5 µm) up to 300 m, LC duplex connector
    • Requires 10 GbE SFP+ transceiver modules, part number SFP-10G-SR(=)
  • 10GBASE-LR
    • 1310 nm communication using single-mode fiber cable up to 10 km, LC duplex connector
    • Requires 10 GbE SFP+ transceiver modules, part number SFP-10G-LR(=)
  • 1000BASE-X:
    • 850 nm communication using multimode fiber cable (50 µm or 62.5 µm) up to 550 m, LC duplex connector (requires GbE SFP transceiver modules, part number GLC-SX-MM(=))
    • 1310 nm communication using single-mode fiber cable up to 10 km, LC duplex connector (requires GbE SFP transceiver modules, part number GLC-LH-SM(=))
  • 1000BASE-T:
    • UTP Category 6
    • UTP Category 5e (100 meters maximum)
    • UTP Category 5 (100 meters maximum)
    • EIA/TIA-568B 100-ohm STP (100 meters maximum)
    • Requires GbE SFP transceiver modules, part number GLC-T(=)
  • RS-232 serial cable: RJ-45-to-DB-9 console cable that comes with the switch module


Related publications

For more information see the following documents:

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
19 October 2009

Last Update
25 July 2013




Author(s)

IBM Form Number
TIPS0754