IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

Product Guide



Abstract

The IBM® Flex System™ Fabric CN4093 10Gb Converged Scalable Switch provides unmatched scalability, port flexibility, performance, convergence, and network virtualization, while also delivering innovations to help address a number of networking concerns today and providing capabilities that will help you prepare for the future. The switch offers full Layer 2/3 switching as well as FCoE Full Fabric and Fibre Channel NPV Gateway operations to deliver a truly converged integrated solution, and it is designed to install within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help clients migrate to a 10 GbE or 40 GbE converged Ethernet infrastructure and offers virtualization features like Virtual Fabric and VMready®.

Changes in the December 1 update:
* IBM Flex System CN4052 2-port 10Gb Virtual Fabric Adapter now supported
* IBM Flex System CN4058S 8-port 10Gb Virtual Fabric Adapter now supported

Introduction

The IBM® Flex System™ Fabric CN4093 10Gb Converged Scalable Switch provides unmatched scalability, port flexibility, performance, convergence, and network virtualization, while also delivering innovations to help address a number of networking concerns today and providing capabilities that will help you prepare for the future. The switch offers full Layer 2/3 switching, transparent "easy connect" mode, as well as FCoE Full Fabric and Fibre Channel NPV Gateway operations to deliver a truly converged integrated solution, and it is designed to install within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help clients migrate to a 10 GbE or 40 GbE converged Ethernet infrastructure and offers virtualization features like Virtual Fabric and VMready®.

Figure 1 shows the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch.

Figure 1. IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

Did you know?

The CN4093 offers up to 12 external IBM Omni Ports, which provide extreme flexibility with the choice of SFP+ based 10 Gb Ethernet connectivity or 4/8 Gb Fibre Channel connectivity, depending on the SFP+ module used.

The base switch configuration comes standard with 22x 10 GbE port licenses that can be assigned to internal connections or external SFP+, Omni or QSFP+ ports with flexible port mapping. For example, this feature allows you to trade off four 10 GbE ports for one 40 GbE port (or vice versa) or trade off one external 10 GbE SFP+ or Omni port for one internal 10 GbE port (or vice versa). You then have the flexibility of turning on more ports when you need them using IBM Features on Demand upgrade licensing capabilities that provide “pay as you grow” scalability without the need to buy additional hardware.

Part number information

The CN4093 switch is initially licensed for 22x 10 GbE ports. Further ports can be enabled with Upgrade 1 and Upgrade 2 license options. Upgrade 1 and Upgrade 2 can be applied on the switch independently from each other or in combination for full feature capability. Table 1 shows the part numbers for ordering the switches and the upgrades.

Table 1. Part numbers and feature codes for ordering

Description                                                            Part number   Feature code (x-config / e-config)
Switch module
IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch           00D5823       A3HH / ESW2
Features on Demand upgrades
IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)    00D5845       A3HL / ESU1
IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 2)    00D5847       A3HM / ESU2

The part number for the switch includes the following items:
  • One IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
  • Important Notices Flyer
  • Warranty Flyer
  • Technical Update Flyer
  • Documentation CD-ROM

Note: No SFP+ or QSFP+ transceivers or cables are included with the switch. They must be ordered separately (see Table 3).

The switch does not include a serial management cable. However, the IBM Flex System Management Serial Access Cable, 90Y9338, is supported. It contains two cables, a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be used to connect to the switch locally for configuration tasks and firmware updates.

The part numbers for the upgrades, 00D5845 and 00D5847, include the following items:
  • Features on Demand Activation Flyer
  • Upgrade authorization key

The base switch and upgrades are as follows:
  • 00D5823 is the part number for the base switch, and it comes with 14 internal 10 GbE ports enabled (one to each node bay), two external 10 GbE SFP+ ports enabled, and six Omni Ports enabled to connect to either Ethernet or Fibre Channel networking infrastructure, depending on the SFP+ transceiver or DAC cable used.
  • 00D5845 (Upgrade 1) can be applied on the base switch when you need more external bandwidth. It enables two external 40 GbE QSFP+ ports, each of which can be converted into four 10 GbE SFP+ links with optional break-out cables, plus 14 additional internal ports, for a total of 28 internal ports, which takes full advantage of four-port adapter cards installed in each compute node. This upgrade requires the base switch.
  • 00D5847 (Upgrade 2) can be applied on the base switch when you need more external Omni Ports or more internal bandwidth to the node bays. It enables the remaining six external Omni Ports, plus 14 additional internal 10 GbE ports, for a total of 28 internal ports, which takes full advantage of four-port adapter cards installed in each compute node. This upgrade requires the base switch.
  • Both 00D5845 (Upgrade 1) and 00D5847 (Upgrade 2) can be applied on the switch at the same time, allowing you to use 42 internal 10 GbE ports (leveraging six ports of an eight-port expansion card) and all external ports on the switch.

Flexible port mapping: With IBM Networking OS version 7.8 or later, clients have more flexibility in assigning the ports that they have licensed on the CN4093, which can help eliminate or postpone the need to purchase upgrades. While the base model and upgrades still activate specific ports by default, flexible port mapping allows clients to reassign ports as needed by moving internal and external 10 GbE ports and Omni Ports, or by trading off four 10 GbE ports for one external 40 GbE port. This flexibility is available with the base license alone and with Upgrade 1 or Upgrade 2.

Note: Flexible port mapping is not available in Stacking mode.

With flexible port mapping, clients have licenses for a specific number of ports:
  • 00D5823 is the part number for the base switch, and it provides 22x 10 GbE port licenses that can enable any combination of internal and external 10 GbE ports and Omni Ports and external 40 GbE ports (with the use of four 10 GbE port licenses per one 40 GbE port).
  • 00D5845 (Upgrade 1) upgrades the base switch by activation of 14 internal 10 GbE ports and two external 40 GbE ports which is equivalent to adding 22 more 10 GbE port licenses for a total of 44x 10 GbE port licenses. Any combination of internal and external 10 GbE ports and Omni Ports and external 40 GbE ports (with the use of four 10 GbE port licenses per one 40 GbE port) can be enabled with this upgrade. This upgrade requires the base switch.
  • 00D5847 (Upgrade 2) upgrades the base switch by activation of 14 internal 10 GbE ports and six external Omni Ports which is equivalent to adding 20 more 10 GbE port licenses for a total of 42x 10 GbE port licenses. Any combination of internal and external 10 GbE ports and Omni Ports and external 40 GbE ports (with the use of four 10 GbE port licenses per one 40 GbE port) can be enabled with this upgrade. This upgrade requires the base switch.
  • Both 00D5845 (Upgrade 1) and 00D5847 (Upgrade 2) together activate all of the ports on the CN4093: 42 internal 10 GbE ports, two external SFP+ ports, 12 external Omni Ports, and two external QSFP+ ports.

Note: When both Upgrade 1 and Upgrade 2 are activated, flexible port mapping is no longer used because all the ports on the CN4093 are enabled.
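
The license arithmetic above can be sketched as a small calculator. This is a hypothetical illustration, not an IBM tool; the function and configuration names are assumptions, while the license counts and the four-licenses-per-40-GbE rule come from the text above.

```python
# Hypothetical flexible port mapping calculator (illustrative only):
# every 10 GbE port consumes one port license, and every 40 GbE port
# consumes four, against the total licenses of the installed upgrades.

PORT_LICENSES = {
    "base": 22,           # 00D5823 alone
    "base+upgrade1": 44,  # 00D5823 + 00D5845
    "base+upgrade2": 42,  # 00D5823 + 00D5847
}

def licenses_needed(ports_10g: int, ports_40g: int) -> int:
    """Licenses consumed by a given mix of 10 GbE and 40 GbE ports."""
    return ports_10g + 4 * ports_40g

def combination_fits(config: str, ports_10g: int, ports_40g: int) -> bool:
    """True if the port mix fits within the licenses of the configuration."""
    return licenses_needed(ports_10g, ports_40g) <= PORT_LICENSES[config]

# Default base mapping: 14 internal + 2 SFP+ + 6 Omni = 22 ports
assert combination_fits("base", 22, 0)
# Trading four 10 GbE licenses for one 40 GbE port also fits:
assert combination_fits("base", 18, 1)
# Upgrade 1 adds 22 licenses; 36x 10 GbE plus 2x 40 GbE fits exactly:
assert combination_fits("base+upgrade1", 36, 2)
```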

Table 2 lists supported port combinations on the switch and required upgrades.

Table 2. Supported port combinations (Part 1: Default port mapping)

  • Base switch 00D5823 only:
    • 14x internal 10 GbE ports
    • 2x external 10 GbE SFP+ ports
    • 6x external SFP+ Omni Ports
  • Base switch plus Upgrade 1 (00D5845):
    • 28x internal 10 GbE ports
    • 2x external 10 GbE SFP+ ports
    • 6x external SFP+ Omni Ports
    • 2x external 40 GbE QSFP+ ports
  • Base switch plus Upgrade 2 (00D5847):
    • 28x internal 10 GbE ports
    • 2x external 10 GbE SFP+ ports
    • 12x external SFP+ Omni Ports
  • Base switch plus Upgrade 1 and Upgrade 2:
    • 42x internal 10 GbE ports†
    • 2x external 10 GbE SFP+ ports
    • 12x external SFP+ Omni Ports
    • 2x external 40 GbE QSFP+ ports
† This configuration leverages six of the eight ports on the CN4058 adapter available for IBM Power Systems™ compute nodes.

Table 2. Supported port combinations (Part 2: Flexible port mapping*)

  • Base switch 00D5823 only:
    • 22x 10 GbE ports (internal and external SFP+ and Omni Ports), or
    • 18x 10 GbE ports plus 1x external 40 GbE QSFP+ port, or
    • 14x 10 GbE ports plus 2x external 40 GbE QSFP+ ports
  • Base switch plus Upgrade 1 (00D5845):
    • 44x 10 GbE ports (internal and external SFP+ and Omni Ports), or
    • 40x 10 GbE ports plus 1x external 40 GbE QSFP+ port, or
    • 36x 10 GbE ports plus 2x external 40 GbE QSFP+ ports
  • Base switch plus Upgrade 2 (00D5847):
    • 42x 10 GbE ports (internal and external SFP+ and Omni Ports), or
    • 38x 10 GbE ports plus 1x external 40 GbE QSFP+ port, or
    • 34x 10 GbE ports plus 2x external 40 GbE QSFP+ ports
* Flexible port mapping is available in IBM Networking OS 7.8 or later.

Supported transceivers and cables

Table 3 lists the supported cables and transceivers.

Table 3. Supported transceivers and direct-attach cables

Description                                                              Part number   Feature code (x-config / e-config)
Serial console cables
IBM Flex System Management Serial Access Cable Kit                       90Y9338       A2RR / None
SFP transceivers - 1 GbE (only supported in the two dedicated SFP+ ports, not in Omni Ports)
IBM SFP RJ-45 Transceiver (1 GbE only; does not support 10/100 Mbps)     81Y1618       3268 / EB29
IBM SFP 1000Base-T (RJ-45) Transceiver (does not support 10/100 Mbps)    00FE333       A5DL / EB29
IBM SFP SX Transceiver                                                   81Y1622       3269 / EB2A
IBM SFP LX Transceiver                                                   90Y9424       A1PN / ECB8
SFP+ transceivers - 10 GbE (supported in SFP+ ports and Omni Ports)
IBM SFP+ SR Transceiver                                                  46C3447       5053 / EB28
IBM SFP+ LR Transceiver                                                  90Y9412       A1PM / ECB9
10GBase-SR SFP+ (MMFiber) transceiver                                    44W4408       4942 / 3282
SFP+ direct-attach cables - 10 GbE (supported in SFP+ ports and Omni Ports)
1m IBM Passive DAC SFP+ Cable                                            90Y9427       A1PH / ECB4
1.5m IBM Passive DAC SFP+ Cable                                          00AY764       A51N / None
2m IBM Passive DAC SFP+ Cable                                            00AY765       A51P / None
3m IBM Passive DAC SFP+ Cable                                            90Y9430       A1PJ / ECB5
5m IBM Passive DAC SFP+ Cable                                            90Y9433       A1PK / ECB6
7m IBM Passive DAC SFP+ Cable (SFP+ ports only, not supported in Omni Ports)   00D6151   A3RH / ECBH
QSFP+ transceiver and cables - 40 GbE (supported in QSFP+ ports)
IBM QSFP+ 40GBASE-SR4 Transceiver (requires cable 90Y3519 or 90Y3521)    49Y7884       A1DR / EB27
10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)           90Y3519       A1MM / EB2J
30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884)           90Y3521       A1MN / EC2K
IBM QSFP+ 40GBASE-LR4 Transceiver                                        00D6222       A3NY / None
QSFP+ breakout cables - 40 GbE to 4x 10 GbE (supported in QSFP+ ports)
1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                                     49Y7886       A1DL / EB24
3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                                     49Y7887       A1DM / EB25
5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable                                     49Y7888       A1DN / EB26
QSFP+ direct-attach cables - 40 GbE (supported in QSFP+ ports)
1m IBM QSFP+ to QSFP+ Cable                                              49Y7890       A1DP / EB2B
3m IBM QSFP+ to QSFP+ Cable                                              49Y7891       A1DQ / EB2H
5m IBM QSFP+ to QSFP+ Cable                                              00D5810       A2X8 / ECBN
7m IBM QSFP+ to QSFP+ Cable                                              00D5813       A2X9 / ECBP
SFP+ transceivers - 8 Gb FC (supported in Omni Ports)
IBM 8Gb SFP+ SW Optical Transceiver (supports 4/8 Gbps)                  44X1964       5075 / 3286

With the flexibility of the CN4093 switch, clients can take advantage of the technologies that they require for multiple environments:
  • For 1 GbE links (supported on external SFP+ ports 1 and 2 only), you can use 1 GbE SFP transceivers plus RJ-45 cables or LC-to-LC fiber cables, depending on the transceiver.
  • For 10 GbE (supported on external SFP+ ports), you can use direct-attached copper (DAC) SFP+ cables for in-rack cabling and distances up to 7 m. These DAC cables have SFP+ connectors on each end, and they do not need separate transceivers. For longer distances the 10GBASE-SR transceiver can support distances up to 300 meters over OM3 multimode fiber or up to 400 meters over OM4 multimode fiber with LC connectors. The 10GBASE-LR transceivers can support distances up to 10 kilometers on single mode fiber with LC connectors.
  • To increase the number of available 10 GbE ports, clients can split out four 10 GbE ports for each 40 GbE port using IBM QSFP+ DAC Breakout Cables for distances up to 5 meters. For distances up to 100 m, optical MTP-to-LC break-out cables can be used with the 40GBASE-SR4 transceiver, but IBM does not supply these optical breakout cables.
  • For 40 GbE to 40 GbE connectivity, clients can use the affordable IBM QSFP+ to QSFP+ DAC cables for distances up to 7 meters. For distances up to 100 m, the 40GBASE-SR4 QSFP+ transceiver can be used with OM3 multimode fiber with MTP connectors, or up to 150 m with OM4 multimode fiber with MTP connectors. For distances up to 10 km, the 40GBASE-LR4 QSFP+ transceiver can be used with single mode fiber with LC connectors.
  • For 4 Gb or 8 Gb FC links (supported on Omni Ports only), you can use 8 Gb FC SFP+ SW transceivers plus LC fiber optics cables. These transceivers can operate at 4 Gb or 8 Gb speeds.
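
The distance and media rules above can be summarized in a small lookup. This is an illustrative sketch, not an IBM-supplied tool; the function name is an assumption, while the speed, distance, and media combinations are taken from the options listed above.

```python
# Illustrative lookup (not an IBM tool): map link speed and distance
# to a cabling option described in the connectivity list above.

def suggest_media(speed_gbe: int, distance_m: float) -> str:
    if speed_gbe == 10:
        if distance_m <= 7:
            return "SFP+ passive DAC"
        if distance_m <= 300:
            return "10GBASE-SR transceiver over OM3 MMF"
        if distance_m <= 400:
            return "10GBASE-SR transceiver over OM4 MMF"
        if distance_m <= 10_000:
            return "10GBASE-LR transceiver over SMF"
    if speed_gbe == 40:
        if distance_m <= 7:
            return "QSFP+ to QSFP+ DAC"
        if distance_m <= 100:
            return "40GBASE-SR4 transceiver over OM3 MMF"
        if distance_m <= 150:
            return "40GBASE-SR4 transceiver over OM4 MMF"
        if distance_m <= 10_000:
            return "40GBASE-LR4 transceiver over SMF"
    raise ValueError("no listed option for this speed/distance")
```

For example, a 120 m 40 GbE run falls past the OM3 limit but within the OM4 limit, so the sketch suggests 40GBASE-SR4 over OM4 multimode fiber.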

Benefits

The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch is considered particularly suited for these clients:

  • Clients who want to implement a converged infrastructure with FCoE, where the CN4093 acts as a full fabric FC/FCoE switch for end-to-end FCoE configurations, or as an integrated Fibre Channel Forwarder (FCF) NPV gateway that breaks out FC traffic within the chassis for native Fibre Channel SAN connectivity.
  • Clients who are implementing a virtualized environment.
  • Clients who require investment protection for 40 GbE external ports.
  • Clients who want to reduce TCO and improve performance while maintaining high levels of availability and security.
  • Clients who want to avoid or minimize oversubscription, which can result in congestion and loss of performance.

The switch offers the following key capabilities and benefits:
  • Convergence and lower acquisition and operational costs

    One of the key trends that is driving the transformation of the data center is converging to a simplified networking infrastructure — collapsing Ethernet and Fibre Channel at the server and the edge of the network while maintaining connectivity upstream to existing LANs and SANs. The CN4093 Converged Switch supports multiple protocols, including Ethernet, Fibre Channel, iSCSI, and FCoE; IBM Omni Ports give clients the flexibility to choose between 10 Gb Ethernet external connections to the top-of-rack switch and 4/8 Gb Fibre Channel for flexible and scalable access to FC storage.

  • Increased performance

    With the growth of virtualization and the evolution of cloud, many of today’s applications require low latency and high bandwidth. As the embedded 10 GbE switch for the Flex System chassis, the CN4093 supports sub-microsecond latency and up to 1.28 Tbps of aggregated throughput, while delivering full line rate performance on Ethernet ports, making it ideal for managing dynamic workloads across the network. In addition, this switch provides a rich Layer 2 and Layer 3 feature set that is ideal for many of today’s data centers, and it offers industry-leading external bandwidth as an integrated switch with 40 GbE external ports.

  • Pay as you grow investment protection and lower total cost of ownership

    The CN4093's flexible port mapping allows you to buy only the ports that you need, when you need them. The base switch configuration includes 22x 10 GbE port licenses that can be assigned to internal connections and external SFP+, Omni, or 40 GbE QSFP+ ports (by using four 10 GbE licenses per one 40 GbE port). You then have the flexibility of turning on more internal 10 GbE connections and more external ports when you need them using Features on Demand licensing capabilities that provide "pay as you grow" scalability without the need for additional hardware.

  • Cloud ready - Optimized network virtualization with virtual NICs

    With the majority of IT organizations implementing virtualization, there is an increased need to reduce the cost and complexity of their environments. IBM helps address these requirements by reducing the number of physical I/O ports that are needed: IBM Virtual Fabric provides a way for companies to carve up 10 GbE ports into virtual NICs (vNICs) with Intel processor-based compute nodes.

    To help deliver maximum performance per vNIC and to provide higher availability and security with isolation between vNICs, the switch leverages capabilities of its IBM Networking Operating System. For large-scale virtualization, the IBM Flex System solution can support up to 32 vNICs by using a pair of CN4054 10Gb Virtual Fabric Adapters in each compute node and four CN4093 10Gb Converged Scalable Switches in the chassis.

    The CN4093 offers the benefits of IBM’s next-generation vNIC - Unified Fabric Port (UFP). UFP is an advanced, cost-effective solution that provides a flexible way for clients to allocate, reallocate, and adjust bandwidth to meet their ever-changing data center requirements.

  • Cloud ready - VM-aware networking

    Delivering advanced virtualization awareness helps simplify management and automates VM mobility by making the network VM aware with IBM VMready, which works with all the major hypervisors. For companies using VMware, IBM System Networking’s SDN for Virtual Environments (sold separately) enables network administrators to simplify management by having a consistent virtual and physical networking environment. With SDN VE, virtual and physical switches use the same configurations, policies, and management tools. Network policies migrate automatically along with virtual machines (VMs) to ensure that security, performance, and access remain intact as VMs move from compute node to compute node.

    Support for Edge Virtual Bridging (EVB) based on the IEEE 802.1Qbg standard enables scalable, flexible management of networking configuration and policy requirements per VM and eliminates many of the networking challenges introduced with server virtualization.

  • Simplify network infrastructure

    The CN4093 10Gb Converged Scalable Switch simplifies deployment and growth by using its innovative scalable architecture. This architecture helps increase return on investment by reducing the qualification cycle, while providing investment protection for additional I/O bandwidth requirements in the future. The extreme flexibility of the switch comes from the ability to turn on additional ports as required, both down to the compute node and for upstream connections (including 40 GbE). Also, as you consider migrating to a converged LAN and SAN, the CN4093 supports Omni Ports for Ethernet or FC connectivity, and it can operate as an integrated FCF, which can be leveraged in an FCoE converged environment.

    CN4093 hybrid stacking capabilities simplify management for clients by stacking up to eight switches (two CN4093 and from two to six EN4093/EN4093R switches) that share one IP address and one management interface. Support for Switch Partition (SPAR) allows clients to virtualize the switch with partitions that isolate communications for multi-tenancy environments.

  • Transparent networking capability

    With a simple configuration change to "easy connect" mode, the CN4093 becomes a transparent network device that is invisible to the core, eliminating network administration concerns about Spanning Tree Protocol configuration and interoperability, VLAN assignments, and possible loops.

    By emulating a host NIC to the data center core, it accelerates the provisioning of VMs by eliminating the need to configure the typical access switch parameters.

  • Integrate network management

    A key challenge is the management of a discrete network environment. The CN4093 supports a command-line interface (CLI) for integration into existing scripting and automation. Network management can be simplified by using port profiles, topology views, and virtualization management.

    For more advanced levels of management and control, IBM System Networking Switch Center (SNSC) can significantly reduce deployment and day-to-day maintenance times, while providing in-depth visibility into the network performance and operations of IBM switches. Furthermore, when leveraging tools like VMware vCenter Server or vSphere, SNSC provides additional integration for better optimization.

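The hybrid stacking rule described under "Simplify network infrastructure" (up to eight switches in a stack: two CN4093 switches plus two to six EN4093/EN4093R switches) can be expressed as a quick validity check. This is an illustrative sketch, not IBM software; the function name is an assumption.

```python
# Illustrative check of the hybrid stacking rule from the text above:
# a hybrid stack combines exactly two CN4093 switches with two to six
# EN4093/EN4093R switches, for a maximum of eight switches total.

def valid_hybrid_stack(cn4093: int, en4093: int) -> bool:
    return cn4093 == 2 and 2 <= en4093 <= 6 and cn4093 + en4093 <= 8

assert valid_hybrid_stack(2, 6)       # eight switches, the maximum
assert not valid_hybrid_stack(1, 6)   # needs two CN4093 switches
assert not valid_hybrid_stack(2, 7)   # exceeds six EN4093 switches
```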

Features and specifications

The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch has the following features and specifications:

  • Internal ports
    • Forty-two internal full-duplex 10 Gigabit ports
    • Two internal full-duplex 1 GbE ports connected to the Chassis Management Module
  • External ports
    • Two ports for 1 Gb or 10 Gb Ethernet SFP/SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, 10GBASE-LR, or SFP+ direct-attach copper (DAC) cables). SFP+ modules and DAC cables are not included and must be purchased separately.
    • Twelve IBM Omni Ports, each of which can operate as 10 Gb Ethernet (support for 10GBASE-SR, 10GBASE-LR, or 10 GbE SFP+ DAC cables) or as auto-negotiating 4/8 Gb Fibre Channel, depending on the SFP+ transceiver installed in the port. SFP+ modules and DAC cables are not included and must be purchased separately.

      Note: Omni Ports do not support 1 Gb SFP Ethernet transceivers.

    • Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DAC cables. In addition, you can use break-out cables to break out each 40 GbE port into four 10 GbE SFP+ connections. QSFP+ modules and DAC cables are not included and must be purchased separately.
    • One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.
  • Scalability and performance
    • 40 Gb Ethernet ports for more external bandwidth and performance
    • Fixed-speed external 10 Gb Ethernet ports to leverage 10 GbE upstream infrastructure
    • Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps on Ethernet ports
    • Media access control (MAC) address learning: automatic update, support for up to 128,000 MAC addresses
    • Up to 128 IP interfaces per switch
    • Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total external bandwidth per switch, up to 64 trunk groups, up to 16 ports per group
    • Support for jumbo frames (up to 9,216 bytes)
    • Broadcast/multicast storm control
    • IGMP snooping to limit flooding of IP multicast traffic
    • IGMP filtering to control multicast traffic for hosts participating in multicast groups
    • Configurable traffic distribution schemes over trunk links based on source/destination IP or MAC addresses, or both
    • Fast port forwarding and fast uplink convergence for rapid STP convergence
  • Availability and redundancy
    • Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy
    • IEEE 802.1D STP for providing L2 redundancy
    • IEEE 802.1s Multiple STP (MSTP) for topology optimization; up to 32 STP instances are supported by a single switch
    • IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical delay-sensitive traffic such as voice or video
    • Per-VLAN Rapid STP (PVRST) enhancements
    • Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes
    • Hot Links provides basic link redundancy with fast recovery for network topologies that require Spanning Tree to be turned off
  • VLAN support
    • Up to 4095 VLANs supported per switch, with VLAN numbers ranging from 1 to 4095 (4095 is used for management module’s connection only.)
    • 802.1Q VLAN tagging support on all ports
    • Private VLANs
  • Security
    • VLAN-based, MAC-based, and IP-based access control lists (ACLs)
    • 802.1x port-based authentication
    • Multiple user IDs and passwords
    • User access control
    • Radius, TACACS+, and LDAP authentication and authorization
    • NIST 800-131A Encryption
    • Selectable encryption protocol; SHA 256 enabled as default
    • IPv6 ACL metering
  • Quality of Service (QoS)
    • Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and destination addresses, VLANs) traffic classification and processing
    • Traffic shaping and re-marking based on defined policies
    • Eight Weighted Round Robin (WRR) priority queues per port for processing qualified traffic
  • IP v4 Layer 3 functions
    • Host management
    • IP forwarding
    • IP filtering with ACLs; up to 896 ACLs supported
    • VRRP for router redundancy
    • Support for up to 128 static routes
    • Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4); up to 2048 entries in a routing table
    • Support for DHCP Relay
    • Support for IGMP snooping and IGMP relay
    • Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and Dense Mode (PIM-DM)
  • IP v6 Layer 3 functions
    • IPv6 host management (except default switch management IP address)
    • IPv6 forwarding
    • Up to 128 static routes
    • Support for OSPF v3 routing protocol
    • IPv6 filtering with ACLs
    • Virtual Station Interface Data Base (VSIDB) support
  • Virtualization
    • Virtual NIC (vNIC)
      • Ethernet, iSCSI, or FCoE traffic is supported on vNICs
    • Unified fabric port (UFP)
      • Ethernet or FCoE traffic is supported on UFPs
      • Supports up to 256 VLANs for the virtual ports
      • Integration with L2 failover
    • Virtual link aggregation groups (vLAGs)
    • 802.1Qbg Edge Virtual Bridging (EVB) is an emerging IEEE standard for allowing networks to become virtual machine (VM)-aware.
      • Virtual Ethernet Bridging (VEB) and Virtual Ethernet Port Aggregator (VEPA) are mechanisms for switching between VMs on the same hypervisor.
      • Edge Control Protocol (ECP) is a transport protocol that operates between two peers over an IEEE 802 LAN providing reliable, in-order delivery of upper layer protocol data units.
      • Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP) allows centralized configuration of network policies that will persist with the VM, independent of its location.
      • EVB Type-Length-Value (TLV) is used to discover and configure VEPA, ECP, and VDP.
    • VMready
    • Switch partitioning (SPAR)
      • SPAR forms separate virtual switching contexts by segmenting the data plane of the switch. Data plane traffic is not shared between SPARs on the same switch.
      • SPAR operates as a Layer 2 broadcast network. Hosts on the same VLAN attached to a SPAR can communicate with each other and with the upstream switch. Hosts on the same VLAN but attached to different SPARs communicate through the upstream switch.
      • SPAR is implemented as a dedicated VLAN with a set of internal compute node ports and a single external port or link aggregation (LAG). Multiple external ports or LAGs are not allowed in SPAR. A port can be a member of only one SPAR.
  • Converged Enhanced Ethernet
    • Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic based on the 802.1p priority value in each packet’s VLAN tag.
    • Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth based on the 802.1p priority value in each packet’s VLAN tag.
    • Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.
    • Multi-hop RDMA over Converged Ethernet (RoCE) with LAG support.
  • Fibre Channel and Fibre Channel over Ethernet (FCoE)
    • FC-BB-5 FCoE specification compliant
    • Native FC Forwarder (FCF) switch operations
    • End-to-end FCoE support (initiator to target)
    • FCoE Initialization Protocol (FIP) support
    • FCoE Link Aggregation Group (LAG) support
    • Optimized FCoE to FCoE forwarding
    • Omni Ports support 4/8 Gb FC when FC SFP+ transceivers are installed in these ports
    • Support for F_port, E_Port ISL, NP_port and VF_port FC port types
    • Full Fabric mode for end-to-end FCoE or FCoE gateway; NPV Gateway mode for external FC SAN attachments (support for IBM B-type, Brocade, and Cisco MDS external SANs)
    • Sixteen buffer credits supported
    • Fabric Device Management Interface (FDMI)
    • NPIV support
    • Fabric Shortest Path First (FSPF)
    • Port security
    • Fibre Channel ping, debugging
    • Supports 2,000 secure FCoE sessions with FIP Snooping by using Class ID ACLs
    • Fabric services in Full Fabric mode:
      • Name Server
      • Registered State Change Notification (RSCN)
      • Login services
      • Zoning
  • Stacking
    • Hybrid stacking support (from two to six EN4093/EN4093R switches with two CN4093 switches) - single IP management
    • FCoE support
      • FCoE LAG on external ports
    • 802.1Qbg support
    • vNIC and UFP support
      • Support for UFP with 802.1Qbg
  • Manageability
    • Simple Network Management Protocol (SNMP V1, V2 and V3)
    • HTTP browser GUI
    • Telnet interface for CLI
    • SSH
    • Secure FTP (sFTP)
    • Service Location Protocol (SLP)
    • Serial interface for CLI
    • Scriptable CLI
    • Firmware image update (TFTP and FTP)
    • Network Time Protocol (NTP) for switch clock synchronization
  • Monitoring
    • Switch LEDs for external port status and switch module status indication
    • Remote Monitoring (RMON) agent to collect statistics and proactively monitor switch performance
    • Port mirroring for analyzing network traffic passing through the switch
    • Change tracking and remote logging with syslog feature
    • Support for sFLOW agent for monitoring traffic in data networks (separate sFLOW analyzer required elsewhere)
    • POST diagnostics
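
The 220 Gb external bandwidth figure quoted under "Scalability and performance" follows directly from the external port counts listed above; as a quick arithmetic check:

```python
# Arithmetic check of the 220 Gb external bandwidth figure:
# two SFP+ ports and twelve Omni Ports at 10 Gb each,
# plus two QSFP+ ports at 40 Gb each.
sfp_plus = 2 * 10
omni = 12 * 10
qsfp_plus = 2 * 40
total_external_gb = sfp_plus + omni + qsfp_plus
assert total_external_gb == 220
```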

The following features are not supported with IPv6:
  • Default switch management IP address
  • SNMP trap host destination IP address
  • Bootstrap Protocol (BOOTP) and DHCP
  • RADIUS, TACACS+, and LDAP
  • QoS metering and re-marking ACLs for out-profile traffic
  • VMware Virtual Center (vCenter) for VMready
  • Routing Information Protocol (RIP)
  • Internet Group Management Protocol (IGMP)
  • Border Gateway Protocol (BGP)
  • Virtual Router Redundancy Protocol (VRRP)
  • sFlow

Standards supported

The CN4093 switch supports the following standards:

  • IEEE 802.1AB Data Center Bridging Capability Exchange Protocol (DCBX)
  • IEEE 802.1D Spanning Tree Protocol (STP)
  • IEEE 802.1p Class of Service (CoS) prioritization
  • IEEE 802.1s Multiple STP (MSTP)
  • IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
  • IEEE 802.1Qbg Edge Virtual Bridging
  • IEEE 802.1Qbb Priority-Based Flow Control (PFC)
  • IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
  • IEEE 802.1x port-based authentication
  • IEEE 802.1w Rapid STP (RSTP)
  • IEEE 802.3 10BASE-T Ethernet
  • IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
  • IEEE 802.3ad Link Aggregation Control Protocol
  • IEEE 802.3ap 10GBASE-KR backplane 10 Gb Ethernet
  • IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet
  • IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet
  • IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
  • IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
  • IEEE 802.3u 100BASE-TX Fast Ethernet
  • IEEE 802.3x Full-duplex Flow Control
  • IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet
  • IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet
  • SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable
  • FC-PH, Revision 4.3 (ANSI/INCITS 230-1994)
  • FC-PH, Amendment 1 (ANSI/INCITS 230-1994/AM1 1996)
  • FC-PH, Amendment 2 (ANSI/INCITS 230-1994/AM2-1999)
  • FC-PH-2, Revision 7.4 (ANSI/INCITS 297-1997)
  • FC-PH-3, Revision 9.4 (ANSI/INCITS 303-1998)
  • FC-PI, Revision 13 (ANSI/INCITS 352-2002)
  • FC-PI-2, Revision 10 (ANSI/INCITS 404-2006)
  • FC-PI-4, Revision 7.0
  • FC-FS, Revision 1.9 (ANSI/INCITS 373-2003)
  • FC-FS-2, Revision 0.91
  • FC-FS-3, Revision 1.11
  • FC-LS, Revision 1.2
  • FC-SW-2, Revision 5.3 (ANSI/INCITS 355-2001)
  • FC-SW-3, Revision 6.6 (ANSI/INCITS 384-2004)
  • FC-SW-5, Revision 8.5 (ANSI/INCITS 461-2010)
  • FC-GS-3, Revision 7.01 (ANSI/INCITS 348-2001)
  • FC-GS-4, Revision 7.91 (ANSI/INCITS 387-2004)
  • FC-GS-6, Revision 9.4 (ANSI/INCITS 463-2010)
  • FC-BB-5, Revision 2.0 for FCoE
  • FCP, Revision 12 (ANSI/INCITS 269-1996)
  • FCP-2, Revision 8 (ANSI/INCITS 350-2003)
  • FCP-3, Revision 4 (ANSI/INCITS 416-2006)
  • FC-MI, Revision 1.92 (INCITS TR-30-2002, except for FL-ports and Class 2)
  • FC-MI-2, Revision 2.6 (INCITS TR-39-2005)
  • FC-SP, Revision 1.6
  • FC-DA, Revision 3.1 (INCITS TR-36-2004)

Supported chassis and adapter cards

The switches are installed in switch bays in the rear of the IBM Flex System Enterprise Chassis, as shown in Figure 2. Switches are normally installed in pairs because I/O adapter cards installed in the compute nodes route to two switch bays for redundancy and performance.

Location of the switch bays in the IBM Flex System Enterprise Chassis
Figure 2. Location of the I/O bays in the IBM Flex System Enterprise Chassis

The midplane connections from the adapters installed in the compute nodes to the switch bays in the chassis are shown diagrammatically in the following figure. The figure shows both half-wide compute nodes, such as the x240 with two adapters, and full-wide compute nodes, such as the p460 with four adapters.

 Logical layout of the interconnects between I/O adapters and I/O modules
Figure 3. Logical layout of the interconnects between I/O adapters and I/O modules

The CN4093 switch can be installed in bays 1, 2, 3, and 4 of the Enterprise chassis. A supported adapter card must be installed in the corresponding slot of the compute node (slot A1 when switches are installed in bays 1 and 2 or slot A2 when switches are in bays 3 and 4). Each adapter can use up to four lanes to connect to the respective I/O module bay. The CN4093 is able to use up to three of the four lanes.

Prior to Networking OS 7.8, with four-port adapters, the optional Upgrade 1 (00D5845) or Upgrade 2 (00D5847) was required for the switch to enable communications on all four ports. With eight-port adapters, both the optional Upgrade 1 and Upgrade 2 were required for the switch to enable communications on six adapter ports; the two remaining ports are not used. With IBM Networking OS 7.8 or later, there is no need to buy additional switch upgrades for 4-port and 8-port adapters, provided that the number of external (upstream network) and internal (compute node network) connections used does not exceed the total number of port licenses on the switch.

In compute nodes that have an integrated dual-port 10 GbE network interface controller (NIC), the NIC ports are routed to bays 1 and 2 through a specialized periscope connector, and an adapter card in slot A1 is not required. However, when needed, the periscope connector can be replaced with an adapter card; in that case, the integrated NIC is disabled.

Table 4 shows the connections between adapters installed in the compute nodes to the switch bays in the chassis.

Table 4. Adapter to I/O bay correspondence

I/O adapter slot      Port on the   Corresponding I/O module bay in the chassis
in the compute node   adapter       Bay 1   Bay 2   Bay 3   Bay 4
Slot 1                Port 1        Yes
                      Port 2                Yes
                      Port 3        Yes
                      Port 4                Yes
                      Port 5        Yes
                      Port 6                Yes
                      Port 7*
                      Port 8*
Slot 2                Port 1                        Yes
                      Port 2                                Yes
                      Port 3                        Yes
                      Port 4                                Yes
                      Port 5                        Yes
                      Port 6                                Yes
                      Port 7*
                      Port 8*
Slot 3 (full-wide     Port 1        Yes
compute nodes only)   Port 2                Yes
                      Port 3        Yes
                      Port 4                Yes
                      Port 5        Yes
                      Port 6                Yes
                      Port 7*
                      Port 8*
Slot 4 (full-wide     Port 1                        Yes
compute nodes only)   Port 2                                Yes
                      Port 3                        Yes
                      Port 4                                Yes
                      Port 5                        Yes
                      Port 6                                Yes
                      Port 7*
                      Port 8*
* Ports 7 and 8 are routed to I/O bays 1 and 2 (Slot 1 and Slot 3) or bays 3 and 4 (Slot 2 and Slot 4), but these ports cannot be used with the CN4093.
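The routing pattern in Table 4 is regular: odd-numbered adapter ports go to the first bay of a pair and even-numbered ports to the second, with slots 1 and 3 using bays 1/2 and slots 2 and 4 using bays 3/4. As an illustrative sketch only (not an IBM tool; the function names are hypothetical), the mapping can be expressed as:

```python
def io_bay_for(slot: int, port: int) -> int:
    """Return the chassis I/O bay that a given adapter slot/port routes to,
    per Table 4: slots 1 and 3 route to bays 1/2, slots 2 and 4 to bays 3/4;
    odd ports use the first bay of the pair, even ports the second."""
    if slot not in (1, 2, 3, 4) or port not in range(1, 9):
        raise ValueError("slot must be 1-4, port must be 1-8")
    bay_pair = (1, 2) if slot in (1, 3) else (3, 4)
    return bay_pair[0] if port % 2 == 1 else bay_pair[1]

def usable_with_cn4093(port: int) -> bool:
    """Ports 7 and 8 are routed to the bays but cannot be used with the CN4093."""
    return port <= 6

print(io_bay_for(1, 1), io_bay_for(2, 4))  # 1 4
```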

Table 5 lists the I/O adapters supported by the CN4093 switch and those that are not supported.

Table 5. Supported network adapters
Description                                                   Part     Feature code           Supported
                                                              number   (x-config / e-config)  with CN4093
40 Gb Ethernet
IBM Flex System EN6132 2-port 40Gb Ethernet Adapter           90Y3482  A3HK / A3HK            No
10 Gb Ethernet
Embedded 10Gb Virtual Fabric Adapter                          None     None / None            Yes*
IBM Flex System CN4022 2-port 10Gb Converged Adapter          88Y5920  A4K3 / A4K3            Yes
IBM Flex System CN4052 2-port 10Gb Virtual Fabric Adapter     00JY800  A5RP / None            Yes
IBM Flex System CN4054 10Gb Virtual Fabric Adapter (4-port)   90Y3554  A1R1 / None            Yes
IBM Flex System CN4054R 10Gb Virtual Fabric Adapter (4-port)  00Y3306  A4K2 / A4K2            Yes
IBM Flex System CN4058 8-port 10Gb Converged Adapter          None     None / EC24            Yes
IBM Flex System CN4058S 8-port 10Gb Virtual Fabric Adapter    94Y5160  A4R6 / None            Yes
IBM Flex System EN4054 4-port 10Gb Ethernet Adapter           None     None / 1762            Yes
IBM Flex System EN4132 2-port 10Gb Ethernet Adapter           90Y3466  A1QY / None            No
IBM Flex System EN4132 2-port 10Gb RoCE Adapter               None     None / EC26            No
1 Gb Ethernet
Embedded 1 Gb Ethernet controller (2-port)**                  None     None / None            Yes
IBM Flex System EN2024 4-port 1Gb Ethernet Adapter            49Y7900  A10Y / 1763            Yes
* The Embedded 10Gb Virtual Fabric Adapter is built into x222 nodes and certain models of the x240 and x440 nodes.
** The Embedded 1 Gb Ethernet controller is built into x220 nodes.

The adapters are installed in slots in each compute node. Figure 4 shows the locations of the slots in the x240 Compute Node. The positions of the adapters in the other supported compute nodes are similar.

 Location of the I/O adapter slots in the IBM Flex System x240 Compute Node
Figure 4. Location of the I/O adapter slots in the IBM Flex System x240 Compute Node

Connectors and LEDs

Figure 5 shows the front panel of the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch.

Front panel of the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
Figure 5. Front panel of the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

The following components are located on the front panel:

  • LEDs that display the status of the switch module and the network:
    • The OK LED indicates that the switch module has passed the power-on self-test (POST) with no critical faults and is operational.
    • The identify LED: this blue LED can be lit through the management software to physically identify the switch module.
    • The error LED (switch module error) indicates that the switch module has failed the POST or detected an operational fault.
  • One mini-USB RS-232 console port that provides an additional means of configuring the switch module. This mini-USB-style connector enables connection of a special serial cable. (The cable is optional and is not included with the switch; see the Part number information section for details.)
  • Two external SFP/SFP+ ports for 1 GbE or 10 GbE connections to external Ethernet devices.
  • Twelve external SFP+ Omni Ports for 10 GbE connections to external Ethernet devices or 4/8 Gb FC connections to external SAN devices.
  • Two external QSFP+ port connectors to attach QSFP+ transceivers or cables for 40 GbE connections, or for splitting a single 40 GbE port into four 10 GbE connections.
  • A link OK LED and a Tx/Rx LED for each external port on the switch module.
  • A mode LED for each pair of Omni Ports that indicates the operating mode (OFF: the port pair is configured for Ethernet operation; ON: the port pair is configured for Fibre Channel operation).

Network cabling requirements

The network cables that can be used with the CN4093 switch are shown in Table 6.

Table 6. CN4093 network cabling requirements

Transceiver                                    Standard        Cable                                                                Connector
40 Gb Ethernet
IBM QSFP+ 40GBASE-SR4 Transceiver (49Y7884)    40GBASE-SR4     IBM MTP fiber optics cables up to 30 m (see Table 3); up to 100/150 m with OM3/OM4 multimode fiber   MTP
IBM QSFP+ 40GBASE-LR4 Transceiver (00D6222)    40GBASE-LR4     1310 nm single-mode fiber cable up to 10 km                          LC
Direct attach cable                            40GBASE-CR4     QSFP+ to QSFP+ DAC cables up to 7 m (see Table 3)                    QSFP+
10 Gb Ethernet
IBM SFP+ SR Transceiver (46C3447)              10GBASE-SR      850 nm multimode fiber cable (50 µm or 62.5 µm) up to 300 m with OM3, or up to 400 m with OM4        LC
IBM SFP+ LR Transceiver (90Y9412)              10GBASE-LR      1310 nm single-mode fiber cable up to 10 km                          LC
Direct attach cable                            10GSFP+Cu       SFP+ DAC cables up to 7 m (see Table 3)                              SFP+
1 Gb Ethernet
IBM SFP RJ-45 Transceiver (81Y1618, 00FE333)   1000BASE-T      UTP Category 5, 5E, or 6 up to 100 m                                 RJ-45
IBM SFP SX Transceiver (81Y1622)               1000BASE-SX     850 nm multimode fiber cable (50 µm or 62.5 µm) up to 550 m          LC
IBM SFP LX Transceiver (90Y9424)               1000BASE-LX     1310 nm single-mode fiber cable up to 10 km                          LC
8 Gb Fibre Channel
IBM 8Gb SFP+ SW Optical Transceiver (44X1964)  FC-PI-4 (8GFC)  850 nm multimode fiber, 50 µm (up to 150 m) or 62.5 µm (up to 21 m)  LC
Management ports
External 1 GbE management port                 1000BASE-T      UTP Category 5, 5E, or 6 up to 100 m                                 RJ-45
External RS-232 management port                RS-232          DB-9-to-mini-USB or RJ-45-to-mini-USB console cable (comes with the optional Management Serial Access Cable, 90Y9338)  Mini-USB

Warranty

The switch carries a 1-year, customer-replaceable unit (CRU) limited warranty. When installed in a chassis, the switch assumes your system’s base warranty and any IBM ServicePac® upgrade.

Physical specifications

These are the approximate dimensions and weight of the switch:

  • Height: 30 mm (1.2 in.)
  • Width: 401 mm (15.8 in.)
  • Depth: 317 mm (12.5 in.)
  • Weight: 3.7 kg (8.1 lb)

Shipping dimensions and weight (approximate):
  • Height: 114 mm (4.5 in.)
  • Width: 508 mm (20.0 in.)
  • Depth: 432 mm (17.0 in.)
  • Weight: 4.1 kg (9.1 lb)

Agency approvals

The switches conform to the following standards:

  • United States FCC 47 CFR Part 15, Subpart B, ANSI C63.4 (2003), Class A
  • IEC/EN 60950-1, Second Edition
  • Canada ICES-003, issue 4, Class A
  • Japan VCCI, Class A
  • Australia/New Zealand AS/NZS CISPR 22:2006, Class A
  • Taiwan BSMI CNS13438, Class A
  • CE Mark (EN55022 Class A, EN55024, EN61000-3-2, EN61000-3-3)
  • CISPR 22, Class A
  • China GB 9254-1998
  • Turkey Communique 2004/9; Communique 2004/22
  • Saudi Arabia EMC.CVG, 28 October 2002

Typical configurations

The following usage scenarios are described:

  • CN4093 FCoE Virtual Fabric in the Full Fabric mode (end-to-end FCoE)
  • CN4093 FCoE Virtual Fabric in the NPV Gateway mode (FC Forwarder)
  • CN4093 with flexible port mapping (4-port network adapter example)

CN4093 FCoE Virtual Fabric in the Full Fabric mode (end-to-end FCoE)

The CN4093 Virtual Fabric vNIC solution is based on the IBM Flex System Enterprise Chassis with a 10 Gb Converged Enhanced Ethernet (CEE) infrastructure and 10 Gb Virtual Fabric Adapters (VFAs) installed in each compute node. In Virtual Fabric mode, the CN4093 10 Gb switch is vNIC-aware: vNICs are configured on the switch, which then propagates the vNIC parameters to the VFA by using the Data Center Bridging Capability Exchange (DCBX) protocol. vNIC bandwidth allocation and metering are performed by both the switch and the VFA: for every defined vNIC, a bidirectional virtual channel with an assigned bandwidth is established between them. Up to 32 vNICs can be configured on a half-wide compute node.
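To illustrate the bandwidth-partitioning idea, the sketch below models how a 10 Gb physical port might be divided among vNICs. It is a simplified illustration, not switch firmware behavior; the limit of four vNICs per 10 Gb port and the 100 Mbps allocation granularity are assumptions based on typical Virtual Fabric configurations.

```python
# Sketch of vNIC bandwidth partitioning on one 10 GbE physical port.
# Assumptions (not from this guide): up to 4 vNICs per port, bandwidth
# assigned in 100 Mbps increments, shares must fit within the 10 Gbps port.
PORT_BW_MBPS = 10_000
MAX_VNICS_PER_PORT = 4
INCREMENT_MBPS = 100

def validate_vnic_plan(vnic_bw_mbps: list[int]) -> bool:
    """Check that a per-port vNIC bandwidth plan is feasible."""
    if len(vnic_bw_mbps) > MAX_VNICS_PER_PORT:
        return False  # too many vNICs on one physical port
    if any(bw <= 0 or bw % INCREMENT_MBPS for bw in vnic_bw_mbps):
        return False  # each share must be a positive multiple of 100 Mbps
    return sum(vnic_bw_mbps) <= PORT_BW_MBPS

# Example: one storage-heavy vNIC plus three general-purpose vNICs
print(validate_vnic_plan([4000, 2000, 2000, 2000]))  # True
```

With four vNICs per port, the 32-vNIC figure for a half-wide node corresponds to eight 10 Gb adapter ports (for example, two 4-port VFAs).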

In the Full Fabric mode, the CN4093 converged switch has 10 GbE external ports to the G8264 top-of-rack switch for external LAN connectivity, and is connected to the integrated IBM Flex System V7000 Storage Node with native FCoE interface via internal 10 GbE links, as shown in Figure 6.

 CN4093 Virtual Fabric in the Full Fabric mode (end-to-end FCoE)
Figure 6. CN4093 Virtual Fabric in the Full Fabric mode (end-to-end FCoE)

The solution components used in the scenario depicted in Figure 6 are listed in Table 7.

Table 7. Components used in an end-to-end FCoE solution with the CN4093 switch (Figure 6)
Diagram    Description                                                           Part     Quantity
reference                                                                        number
1          IBM Flex System end-to-end FCoE solution:
           IBM Flex System x240 Compute Node or other supported compute node     Varies   Varies
           IBM Flex System CN4054 10Gb Virtual Fabric Adapter                    90Y3554  1 per node
           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade                 90Y3558  1 per VFA
           IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch          00D5823  2
           IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)*  00D5845  1 per CN4093
2          IBM Flex System V7000 Storage Node
           IBM RackSwitch G8264
* Upgrade 1 might not be needed with flexible port mapping if the total number of internal and external ports used on the CN4093 is less than or equal to 22.

Note: You also need SFP+ modules and optical cables or SFP+ DAC cables (not shown in Table 7, see Table 3 for details) for the external 10 Gb Ethernet connectivity.

CN4093 FCoE Virtual Fabric in the NPV Gateway mode

As part of an IBM FCoE solution, the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch can operate as an integrated FC Forwarder in the NPV Gateway mode, providing the capability to connect to the external IBM B-type, Brocade, or Cisco MDS storage networks, as shown in Figure 7 and Figure 8.

 CN4093 as an NPV Gateway connected to the IBM B-type SAN
Figure 7. CN4093 as an NPV Gateway connected to the IBM B-type SAN

 CN4093 as an NPV Gateway connected to the Cisco MDS SAN
Figure 8. CN4093 as an NPV Gateway connected to the Cisco MDS SAN

The solution components used in the scenarios depicted in Figure 7 and Figure 8 are listed in Table 8 and Table 9 respectively.

Table 8. CN4093 as an NPV Gateway connected to the IBM B-type SAN (Figure 7)
Diagram    Description                                                           Part     Quantity
reference                                                                        number
1          IBM Flex System FCoE solution:
           IBM Flex System x240 Compute Node or other supported compute node     Varies   Varies
           IBM Flex System CN4054 10Gb Virtual Fabric Adapter                    90Y3554  1 per compute node
           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade                 90Y3558  1 per VFA
           IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch          00D5823  2
           IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)*  00D5845  1 per CN4093
2          IBM RackSwitch G8264
           IBM B-type or Brocade SAN fabric
           IBM System Storage FC disk controllers:
           IBM System Storage DS3000 / DS5000
           IBM System Storage DS8000
           IBM Storwize V7000 / SAN Volume Controller
           IBM XIV
* Upgrade 1 might not be needed with flexible port mapping if the total number of internal and external ports used on the CN4093 is less than or equal to 22.

Table 9. CN4093 as an NPV Gateway connected to the Cisco MDS SAN (Figure 8)
Diagram    Description                                                           Part     Quantity
reference                                                                        number
1          IBM Flex System FCoE solution:
           IBM Flex System x240 Compute Node or other supported compute node     Varies   Varies
           IBM Flex System CN4054 10Gb Virtual Fabric Adapter                    90Y3554  1 per compute node
           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade                 90Y3558  1 per VFA
           IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch          00D5823  2
           IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)*  00D5845  1 per CN4093
2          IBM RackSwitch G8264
           Cisco MDS SAN fabric
           IBM System Storage FC disk controllers:
           IBM System Storage DS3000 / DS5000
           IBM System Storage DS8000
           IBM Storwize V7000 / SAN Volume Controller
           IBM XIV
* Upgrade 1 might not be needed with flexible port mapping if the total number of internal and external ports used on the CN4093 is less than or equal to 22.

Note: You also need SFP+ modules and optical cables or SFP+ DAC cables (not shown in Table 8 and Table 9; see Table 3 for details) for the external 10 Gb Ethernet connectivity, and SW 8 Gb FC SFP+ transceivers and optical cables (also identified in Table 3) for the external Fibre Channel connectivity.

IBM performs extensive FCoE testing to ensure network interoperability. For a full listing of IBM supported FCoE and iSCSI configurations, see the System Storage Interoperation Center (SSIC) website at:
http://ibm.com/systems/support/storage/ssic

CN4093 with flexible port mapping (4-port network adapter example)

Prior to IBM Networking OS 7.8, compute nodes with 4-port network adapters required Upgrade 1 or Upgrade 2 for the CN4093 to enable connectivity on all four adapter ports, regardless of the number of compute nodes and external connections used. With the introduction of flexible port mapping in IBM Networking OS 7.8, if the Flex System chassis is not fully populated with compute nodes that have four network ports, there might be no need to buy Upgrade 1.

Consider the following scenario. You are planning to install nine x240 compute nodes with CN4054 adapters, or nine high-density x222 compute nodes, that will be connected to two CN4093 switches installed in I/O bays 1 and 2. You are also planning to use two external 10 GbE ports on each CN4093 for connectivity to the upstream network and two Omni Ports on each switch for FC SAN connectivity. In this scenario, the total number of 10 GbE ports and Omni Ports needed per CN4093 is 22. The base switch supplies the required 22 port licenses; therefore, the solution can be implemented without buying Upgrade 1 or Upgrade 2.
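The license arithmetic from the scenario above can be sketched as follows. This is an illustrative check only, using the 22 base port licenses stated in this guide; the function names are hypothetical, and upgrade thresholds beyond the base license count are not modeled.

```python
# Flexible port mapping: each internal (compute node), external 10 GbE,
# and Omni Port connection consumes one port license on the CN4093.
BASE_PORT_LICENSES = 22  # supplied by the base switch, per this guide

def licenses_needed(internal_ports: int, external_10gbe: int, omni_ports: int) -> int:
    """Total port licenses consumed on one CN4093."""
    return internal_ports + external_10gbe + omni_ports

# Scenario: nine nodes, each with two adapter ports reaching this switch,
# plus two external 10 GbE uplinks and two Omni Ports for FC SAN.
internal = 9 * 2
needed = licenses_needed(internal, external_10gbe=2, omni_ports=2)
print(needed, needed <= BASE_PORT_LICENSES)  # 22 True
```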

Figure 9 illustrates this scenario.

CN4093 with flexible port mapping (4-port network adapter example)
Figure 9. CN4093 with flexible port mapping (4-port network adapter example)

The solution components used in the scenario shown in Figure 9 are listed in Table 10.

Table 10. CN4093 with 4-port network adapters
Diagram    Description                                                           Part     Quantity
reference                                                                        number
1          IBM Flex System x240 Compute Node or other supported compute node     Varies   Up to 9 per chassis
2          IBM Flex System CN4054R 10Gb Virtual Fabric Adapter                   00Y3306  1 per compute node
           IBM Flex System CN4054 Virtual Fabric Adapter Upgrade                 90Y3558  1 per VFA
           IBM Flex System Enterprise Chassis with additional power supplies     8721A1G  1
           and fan modules if needed
           IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch          00D5823  2 per chassis
           SFP+ modules and optical cables or SFP+ DAC cables (see Table 3)      Varies   4 (2 per CN4093)
           IBM 8Gb SFP+ SW Optical Transceiver                                   44X1964  4 (2 per CN4093)

Related publications and links

For more information see the following IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch product publications, available from the IBM Flex System Information Center:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp

  • Installation Guide
  • Application Guide
  • Command Reference

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
13 November 2012

Last Update
01 December 2014





IBM Form Number
TIPS0910