IBM Flex System Fabric SI4093 System Interconnect Module

IBM Redbooks Product Guide

Abstract

The IBM® Flex System™ Fabric SI4093 System Interconnect Module enables simplified integration of IBM Flex System into your existing networking infrastructure. The SI4093 requires no management for most data center environments, eliminating the need to configure each networking device or individual ports, thus reducing the number of management points. The device provides a low latency, loop-free interface that does not rely upon spanning tree protocols, thus removing one of the greatest deployment and management complexities of a traditional switch. The SI4093 offers administrators a simplified deployment experience while maintaining the performance of intra-chassis connectivity and the simplicity of a single aggregated connection to the upstream network.

Changes in the 10 March 2014 update:
* Added 1.5 m, 2 m, and 7 m passive DAC cables for 10 GbE
* Added 5 m and 7 m QSFP+ DAC cables
* Updated adapter support table

Introduction

The IBM® Flex System™ Fabric SI4093 System Interconnect Module enables simplified integration of IBM Flex System into your existing networking infrastructure. The SI4093 requires no management for most data center environments, eliminating the need to configure each networking device or individual ports, thus reducing the number of management points. It provides a low latency, loop-free interface that does not rely upon spanning tree protocols, thus removing one of the greatest deployment and management complexities of a traditional switch. The SI4093 offers administrators a simplified deployment experience while maintaining the performance of intra-chassis connectivity. The SI4093 System Interconnect Module is shown in Figure 1.

Figure 1. IBM Flex System Fabric SI4093 System Interconnect Module

Did you know?

Flexible port licensing for the SI4093 allows you to buy only the ports that you need, when you need them. The base module includes fourteen 10 GbE internal connections and ten 10 GbE uplinks. You then have the flexibility of turning on more 10 GbE internal links and more 10 GbE or 40 GbE uplinks when you need them by using IBM Features on Demand licensing capabilities that provide “pay as you grow” scalability.

The SI4093 provides transparent Flex System connectivity to your existing Cisco, Juniper, or other vendor network. The SI4093 aggregates compute node ports by appearing as a simple pass-thru device, and the upstream network sees a “large pipe” of server traffic coming to and from the chassis, with the main difference being that intra-chassis switching is supported. With the SI4093, your network administration team continues to use the same network management tools that are deployed in the network to manage the connectivity from the physical servers in the chassis to the upstream network.

With support for Converged Enhanced Ethernet (CEE), the SI4093 can be used as an FCoE transit device, in addition to being ideal for network-attached storage (NAS) and iSCSI environments.

Part number information

The SI4093 module is initially licensed with fourteen 10 Gb internal ports and ten 10 Gb external uplink ports enabled. Further ports can be enabled: Upgrade 1 adds 14 internal ports and two 40 Gb external uplink ports, and Upgrade 2 adds another 14 internal ports and four additional SFP+ 10 Gb external ports. Upgrade 1 must be applied before Upgrade 2 can be applied. Table 1 shows the part numbers for ordering the interconnect module and the upgrades.

Table 1. Part numbers and feature codes for ordering

Description | Part number | Feature code (x-config / e-config)
Interconnect module
IBM Flex System Fabric SI4093 System Interconnect Module | 95Y3313 | A45T / ESWA
Features on Demand upgrades
IBM Flex System Fabric SI4093 System Interconnect Module (Upgrade 1) | 95Y3318 | A45U / ESW8
IBM Flex System Fabric SI4093 System Interconnect Module (Upgrade 2) | 95Y3320 | A45V / ESW9

The base part number for the interconnect module includes the following items:
  • One IBM Flex System Fabric SI4093 System Interconnect Module
  • Important Notices Flyer
  • Warranty Flyer
  • Documentation CD-ROM

Note: SFP (small form-factor pluggable) and SFP+ (small form-factor pluggable plus) transceivers and cables are not included with the interconnect module. They must be ordered separately (see Table 3).

The interconnect module does not include a serial management cable. However, the IBM Flex System Management Serial Access Cable, 90Y9338, is supported. It contains two cables, a mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be used to connect to the interconnect module locally for configuration tasks and firmware updates.

The part numbers for the upgrades, 95Y3318 and 95Y3320, include the following items:
  • Feature on Demand Activation Flyer
  • Upgrade authorization key

The base switch and upgrades are as follows:
  • 95Y3313 is the part number for the physical device, and it comes with 14 internal 10 Gb ports enabled (one to each node bay) and ten external 10 Gb ports enabled for connectivity to an upstream network, plus external servers and storage. All external 10 Gb ports are SFP+ based connections.
  • 95Y3318 (Upgrade 1) can be applied on the base interconnect module to take full advantage of four-port adapters that are installed in each compute node. This upgrade enables 14 additional internal ports, for a total of 28 ports. The upgrade also enables two 40 Gb uplinks with QSFP+ connectors. These QSFP+ ports can also be converted to four 10 Gb SFP+ DAC connections by using the appropriate fan-out cable. This upgrade requires the base interconnect module.
  • 95Y3320 (Upgrade 2) can be applied on top of Upgrade 1 when you want more uplink bandwidth on the interconnect module or more internal bandwidth to the compute nodes with adapters that are capable of supporting six ports (such as the CN4058). The upgrade enables the remaining four external 10 Gb uplinks with SFP+ connectors, plus 14 additional internal 10 Gb ports, for a total of 42 ports (three to each compute node).
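
As a quick summary of this licensing arithmetic, the following Python sketch (an illustration only, not an IBM utility; the function name and output format are invented here) tallies the ports and aggregate uplink bandwidth that are enabled at each licensing level.

# Illustrative tally of SI4093 Features on Demand port counts (not an IBM utility).
# The figures come from the upgrade descriptions above.

def enabled_ports(upgrade1=False, upgrade2=False):
    """Return the ports and uplink bandwidth enabled for a combination of FoD upgrades."""
    if upgrade2 and not upgrade1:
        raise ValueError("Upgrade 2 requires Upgrade 1 to be applied first")

    internal_10g = 14        # base: one port to each of the 14 node bays
    external_10g = 10        # base: ten SFP+ uplinks
    external_40g = 0

    if upgrade1:
        internal_10g += 14   # second port to each node bay
        external_40g += 2    # two QSFP+ uplinks
    if upgrade2:
        internal_10g += 14   # third port to each node bay
        external_10g += 4    # remaining four SFP+ uplinks

    uplink_gbps = external_10g * 10 + external_40g * 40
    return {
        "internal 10GbE ports": internal_10g,
        "external 10GbE ports": external_10g,
        "external 40GbE ports": external_40g,
        "uplink bandwidth (Gbps)": uplink_gbps,
    }

print(enabled_ports())                              # base module: 14 internal, 100 Gbps uplink
print(enabled_ports(upgrade1=True))                 # 28 internal, 180 Gbps uplink
print(enabled_ports(upgrade1=True, upgrade2=True))  # 42 internal, 220 Gbps uplink

With both upgrades applied, the sketch reports 42 internal 10 GbE ports and 14 x 10 + 2 x 40 = 220 Gb of uplink bandwidth, which matches the figures quoted in the "Features and specifications" section.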

Table 2 lists the supported port combinations on the interconnect module and the required upgrades.

Table 2. Supported port combinations
Supported port combinations | Quantity of base switch, 95Y3313 | Quantity of Upgrade 1, 95Y3318 | Quantity of Upgrade 2, 95Y3320
14x internal 10 GbE, 10x external 10 GbE | 1 | 0 | 0
28x internal 10 GbE, 10x external 10 GbE, 2x external 40 GbE | 1 | 1 | 0
42x internal 10 GbE†, 14x external 10 GbE, 2x external 40 GbE | 1 | 1 | 1
† This configuration uses six of the eight ports on the CN4058 adapter that are available for IBM Power Systems™ compute nodes.

Supported cables and transceivers

Table 3 lists the supported cables and transceivers.

Table 3. Supported transceivers and direct-attach cables

Description | Part number | Feature code (x-config / e-config)
Serial console cables
IBM Flex System Management Serial Access Cable Kit | 90Y9338 | A2RR / None
SFP transceivers - 1 GbE
IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps) | 81Y1618 | 3268 / EB29
IBM SFP SX Transceiver | 81Y1622 | 3269 / EB2A
IBM SFP LX Transceiver | 90Y9424 | A1PN / ECB8
SFP+ transceivers - 10 GbE
IBM SFP+ SR Transceiver | 46C3447 | 5053 / EB28
IBM SFP+ LR Transceiver | 90Y9412 | A1PM / ECB9
10GBase-SR SFP+ (MMFiber) transceiver | 44W4408 | 4942 / 3282
SFP+ direct-attach cables - 10 GbE
1m IBM Passive DAC SFP+ Cable | 90Y9427 | A1PH / ECB4
1.5m IBM Passive DAC SFP+ Cable | 00AY764 | A51N / None
2m IBM Passive DAC SFP+ Cable | 00AY765 | A51P / None
3m IBM Passive DAC SFP+ Cable | 90Y9430 | A1PJ / ECB5
5m IBM Passive DAC SFP+ Cable | 90Y9433 | A1PK / ECB6
7m IBM Passive DAC SFP+ Cable | 00D6151 | A3RH / ECBH
QSFP+ transceiver and cables - 40 GbE
IBM QSFP+ 40GBASE-SR4 Transceiver (requires either cable 90Y3519 or cable 90Y3521) | 49Y7884 | A1DR / EB27
10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) | 90Y3519 | A1MM / EB2J
30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) | 90Y3521 | A1MN / EC2K
IBM QSFP+ 40GBASE-LR4 Transceiver | 00D6222 | A3NY / None
QSFP+ breakout cables - 40 GbE to 4x10 GbE
1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable | 49Y7886 | A1DL / EB24
3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable | 49Y7887 | A1DM / EB25
5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable | 49Y7888 | A1DN / EB26
QSFP+ direct-attach cables - 40 GbE
1m IBM QSFP+ to QSFP+ Cable | 49Y7890 | A1DP / EB2B
3m IBM QSFP+ to QSFP+ Cable | 49Y7891 | A1DQ / EB2H
5m IBM QSFP+ to QSFP+ Cable | 00D5810 | A2X8 / ECBN
7m IBM QSFP+ to QSFP+ Cable | 00D5813 | A2X9 / ECBP

With the flexibility of the interconnect module, you can take advantage of the technologies that are required for multiple environments:
  • For 1 GbE links, you can use SFP transceivers plus RJ-45 cables or LC-to-LC fiber cables, depending on the transceiver.
  • For 10 GbE (supported on external SFP+ ports), you can use direct-attach copper (DAC) SFP+ cables for in-rack cabling and distances up to 7 m. These DAC cables have SFP+ connectors on each end, and they do not need separate transceivers. For longer distances, you can use optical transceivers. The 10GBASE-SR transceiver supports distances up to 300 m on OM3 multimode fiber with LC connectors. The 10GBASE-LR transceiver supports distances up to 10 km on single-mode fiber with LC connectors.
  • For 40 GbE links (supported on QSFP+ ports), you can split each 40 GbE port into four 10 GbE ports using IBM QSFP+ DAC Breakout Cables for distances up to 5 m. For 40 GbE to 40 GbE connectivity, you can use affordable IBM QSFP+ to QSFP+ DAC cables for distances up to 7 m. For distances up to 100 m, the 40GBASE-SR4 QSFP+ transceiver can be used with OM3 multimode fiber with MTP connectors. For distances up to 10 km, the 40GBASE-LR4 QSFP+ transceiver can be used with single mode fiber with LC connectors.
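
As a rough illustration of these cabling rules, here is a small Python sketch (a simplification for this guide, not an IBM selection tool; the function names are invented) that picks a cabling option based on the port type and the required link distance. The distance thresholds come directly from the bullets above; always confirm the exact part numbers against Table 3.

# Simplified cable/transceiver chooser based on the distance limits quoted above.
# Illustrative only; verify final choices against Table 3 and the cabling requirements table.

def pick_10gbe_option(distance_m):
    if distance_m <= 7:
        return "SFP+ passive DAC cable (up to 7 m)"
    if distance_m <= 300:
        return "10GBASE-SR SFP+ transceiver with OM3 multimode fiber, LC connectors (up to 300 m)"
    if distance_m <= 10_000:
        return "10GBASE-LR SFP+ transceiver with single-mode fiber, LC connectors (up to 10 km)"
    return "distance exceeds the supported 10 GbE options"

def pick_40gbe_option(distance_m, breakout_to_10gbe=False):
    if breakout_to_10gbe:
        return ("QSFP+ to 4x SFP+ breakout DAC (up to 5 m)" if distance_m <= 5
                else "breakout DAC cables are limited to 5 m")
    if distance_m <= 7:
        return "QSFP+ to QSFP+ DAC cable (up to 7 m)"
    if distance_m <= 100:
        return "40GBASE-SR4 QSFP+ transceiver with OM3 multimode fiber, MTP connectors (up to 100 m)"
    if distance_m <= 10_000:
        return "40GBASE-LR4 QSFP+ transceiver with single-mode fiber, LC connectors (up to 10 km)"
    return "distance exceeds the supported 40 GbE options"

print(pick_10gbe_option(5))    # in-rack: passive DAC
print(pick_40gbe_option(50))   # cross-row: 40GBASE-SR4 over OM3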

Benefits

The SI4093 interconnect module is particularly suited to the following clients:

  • Clients who want simple 10 GbE network connectivity from the chassis to the upstream network infrastructure, without the complexity of spanning tree and other advanced Layer 2 and Layer 3 features.
  • Clients who want to manage physical server connectivity in the chassis by using the existing network management tools.
  • Clients who require investment protection for 40 GbE uplinks.
  • Clients who want to reduce total cost of ownership (TCO) and improve performance, while maintaining high levels of availability and security.
  • Clients who want to avoid or minimize oversubscription, which can result in congestion and loss of performance.
  • Clients who want to implement a converged infrastructure with NAS, iSCSI, or FCoE. For FCoE implementations, the SI4093 passes through FCoE traffic upstream to other devices, such as the IBM RackSwitch™ G8264CS, Brocade VDX, or Cisco Nexus 5548/5596, where the FC traffic is broken out.

The switches offer the following key features and benefits:
  • Increased performance

    With the growth of virtualization and the evolution of cloud computing, many of today’s applications require low latency and high-bandwidth performance. The SI4093 supports submicrosecond latency and up to 1.28 Tbps throughput, while delivering full line rate performance. In addition to 10 Gb ports, the SI4093 supports 40 Gb uplink ports, enabling forward-thinking clients to connect to an advanced 40 Gb network today or to keep that option open as investment protection for the future.

    The SI4093 also offers increased security and a performance advantage when it is configured in VLAN-aware mode: it does not force server-to-server communications upstream into the network, which reduces latency and generates less network traffic.

  • Pay as you grow flexibility

    The SI4093 flexible port licensing allows you to buy only the ports that you need, when you need them. The base interconnect module configuration includes fourteen 10 GbE connections to the node bays and ten 10 GbE uplinks. You then have the flexibility of turning on more 10 GbE connections to the node bays and more 10 GbE or 40 GbE uplinks when you need them by using Features on Demand (FoD) licensing capabilities. FoD provides pay as you grow scalability without a need to buy additional hardware that consumes power and requires additional management.

  • Simplified network infrastructure

    The SI4093 simplifies deployment and growth because of its innovative scalable architecture. This architecture helps increase return on investment (ROI) by reducing the qualification cycle, while providing investment protection for additional I/O bandwidth requirements in the future. The extreme flexibility of the interconnect module comes from its ability to turn on additional ports as required, both down to the server and for upstream connections (including 40 GbE). Also, as you consider migrating to a converged LAN and SAN, the SI4093 supports the newest protocols, including Data Center Bridging/Converged Enhanced Ethernet (DCB/CEE), which can be used in iSCSI, Fibre Channel over Ethernet (FCoE), or NAS converged environments.

    The default configuration of the SI4093 requires little or no management for most data center environments, eliminating the need to configure each device or individual ports, thus reducing the number of management points.

    Support for Switch Partition (SPAR) allows clients to virtualize the switch with partitions that isolate communications for multi-tenancy environments.

  • Transparent networking

    The SI4093 is a transparent network device that is invisible to the upstream network, eliminating network administration concerns about Spanning Tree Protocol configuration and interoperability, VLAN assignments, and possible loops.

    By emulating a host NIC to the data center core, the SI4093 accelerates the provisioning of virtual machines (VMs) because the typical access switch parameters do not need to be configured.


Features and specifications

The IBM Flex System Fabric SI4093 System Interconnect Module has the following features and specifications:

  • Modes of operations
    • Transparent (or VLAN-agnostic) mode.

      In VLAN-agnostic mode (the default configuration), the SI4093 transparently forwards VLAN-tagged frames without filtering on the customer VLAN tag, providing an end host view to the upstream network. The interconnect module provides traffic consolidation in the chassis to minimize ToR port utilization, and it also enables server-to-server communication for optimum performance (for example, vMotion). It can be connected to an FCoE transit switch or an FCoE gateway (FC Forwarder) device.

    • Local Domain (or VLAN-aware) mode.

      In VLAN-aware mode (optional configuration), the SI4093 provides additional security for multi-tenant environments by extending client VLAN traffic isolation to the interconnect module and its uplinks. VLAN-based access control lists (ACLs) can be configured on the SI4093. When FCoE is used, the SI4093 operates as an FCoE transit switch, and it should be connected to the FCF device.

  • Internal ports
    • Forty-two internal full-duplex 10 Gigabit ports. (Fourteen ports are enabled by default. Optional FoD licenses are required to activate the remaining 28 ports.)
    • Two internal full-duplex 1 GbE ports that are connected to the chassis management module.
  • External ports
    • Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ copper direct-attach cables (DAC). Ten ports are enabled by default. An optional FoD license is required to activate the remaining four ports. SFP+ modules and DACs are not included and must be purchased separately.
    • Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs. (Ports are disabled by default. An optional FoD license is required to activate them.) QSFP+ modules and DACs are not included and must be purchased separately.
    • One RS-232 serial port (mini-USB connector) that provides an additional means to configure the switch module.
  • Scalability and performance
    • 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
    • External 10 Gb Ethernet ports to leverage 10 Gb upstream infrastructure.
    • Non-blocking architecture with wire-speed forwarding of traffic and aggregated throughput of 1.28 Tbps.
    • Media access control (MAC) address learning: automatic update, support for up to 128,000 MAC addresses.
    • Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total uplink bandwidth per interconnect module.
    • Support for jumbo frames (up to 9,216 bytes).
  • Availability and redundancy
    • Layer 2 Trunk Failover to support active/standby configurations of network adapter teaming on compute nodes.
    • Built-in link redundancy with loop prevention without a need for Spanning Tree protocol.
  • VLAN support
    • Up to 32 VLANs supported per interconnect module SPAR partition, with VLAN numbers 1 - 4095 (VLAN 4095 is used for the management module’s connection only).
    • 802.1Q VLAN tagging support on all ports.
  • Security
    • VLAN-based access control lists (ACLs) (VLAN-aware mode).
    • Multiple user IDs and passwords.
    • User access control.
    • RADIUS, TACACS+, and LDAP authentication and authorization.
  • Quality of service (QoS)
    • Support for IEEE 802.1p traffic classification and processing.
  • Virtualization
    • Switch Independent Virtual NIC (vNIC2).
      • Ethernet, iSCSI, or FCoE traffic is supported on vNICs.
    • Switch partitioning (SPAR)
      • SPAR forms separate virtual switching contexts by segmenting the data plane of the switch. Data plane traffic is not shared between SPARs on the same switch.
      • SPAR operates as a Layer 2 broadcast network. Hosts on the same VLAN that are attached to a SPAR can communicate with each other and with the upstream switch. Hosts on the same VLAN but attached to different SPARs communicate through the upstream switch, as illustrated by the reachability sketch after this feature list.
      • SPAR is implemented as a dedicated VLAN with a set of internal server ports and a single uplink port or link aggregation (LAG). Multiple uplink ports or LAGs are not allowed in SPAR. A port can be a member of only one SPAR.
  • Converged Enhanced Ethernet
    • Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow control to allow the switch to pause traffic based on the 802.1p priority value in each packet’s VLAN tag.
    • Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for allocating link bandwidth based on the 802.1p priority value in each packet’s VLAN tag.
    • Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows neighboring network devices to exchange information about their capabilities.
  • Fibre Channel over Ethernet (FCoE)
    • FC-BB-5 FCoE specification compliant.
    • FCoE transit switch operations.
    • FCoE Initialization Protocol (FIP) support.
  • Manageability
    • IPv4 and IPv6 host management.
    • Simple Network Management Protocol (SNMP V1, V2, and V3).
    • Industry standard command-line interface (IS-CLI) through Telnet, SSH, and serial port.
    • Secure FTP (sFTP).
    • Service Location Protocol (SLP).
    • Firmware image update (TFTP and FTP/sFTP).
    • Network Time Protocol (NTP) for clock synchronization.
    • IBM System Networking Switch Center (SNSC) support.
  • Monitoring
    • Switch LEDs for external port status and switch module status indication.
    • Change tracking and remote logging with syslog feature.
    • POST diagnostic tests.
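
The SPAR isolation behavior that is described in the feature list can be expressed as a small reachability model. The following Python sketch is a hypothetical illustration (the port names, SPAR names, and functions are invented for the example, and it is only a conceptual model, not a representation of the module’s configuration interface): ports in the same SPAR and VLAN are switched locally, while ports in different SPARs on the same VLAN communicate through the upstream switch.

# Hypothetical model of SPAR (Switch Partition) reachability, based on the rules above:
# data-plane traffic is not shared between SPARs, so hosts in different SPARs on the
# same VLAN must communicate through the upstream switch.

from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    spar: str      # each port belongs to exactly one SPAR
    vlan: int

def path(a, b):
    if a.vlan != b.vlan:
        return "no Layer 2 path (different VLANs)"
    if a.spar == b.spar:
        return "switched locally inside the SPAR"
    return "forwarded through the uplink to the upstream switch and back"

node1 = Port("INTA1", spar="SPAR-1", vlan=10)   # names are illustrative only
node2 = Port("INTA2", spar="SPAR-1", vlan=10)
node3 = Port("INTA3", spar="SPAR-2", vlan=10)

print(path(node1, node2))   # switched locally inside the SPAR
print(path(node1, node3))   # forwarded through the upstream switch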

Standards supported

The switches support the following standards:

  • IEEE 802.1AB Data Center Bridging Capability Exchange Protocol (DCBX)
  • IEEE 802.1p Class of Service (CoS) prioritization
  • IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
  • IEEE 802.1Qbb Priority-Based Flow Control (PFC)
  • IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
  • IEEE 802.3 10BASE-T Ethernet
  • IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
  • IEEE 802.3ad Link Aggregation Control Protocol
  • IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet
  • IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet
  • IEEE 802.3ap 10GBASE-KR backplane 10 Gb Ethernet
  • IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
  • IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
  • IEEE 802.3u 100BASE-TX Fast Ethernet
  • IEEE 802.3x Full-duplex Flow Control
  • IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet
  • IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet
  • SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable

Supported chassis and adapters

The I/O modules are installed in switch bays in the rear of the IBM Flex System Enterprise Chassis, as shown in Figure 2. I/O modules are normally installed in pairs because ports on the I/O adapters that are installed in the compute nodes are routed to two switch bays for redundancy and performance.

Figure 2. Location of the switch bays in the IBM Flex System Enterprise Chassis

The connections from the adapters that are installed in the compute nodes to the switch bays in the chassis are shown in Figure 3. The figure shows both half-wide servers, such as the x240 with two adapters, and full-wide servers, such as the x440 with four adapters.

Figure 3. Logical layout of the interconnects between I/O adapters and I/O modules

The SI4093 interconnect modules can be installed in bays 1, 2, 3, and 4 of the Enterprise Chassis. A supported adapter must be installed in the corresponding slot of the compute node (slot A1 when interconnect modules are installed in bays 1 and 2, or slot A2 when the modules are in bays 3 and 4). With four-port adapters, optional Upgrade 1 (95Y3318) is required for the interconnect module to allow communications on all four ports. With eight-port adapters, both optional Upgrade 1 (95Y3318) and Upgrade 2 (95Y3320) are required for the interconnect module to allow communications on six adapter ports, and the two remaining ports are not used.

In compute nodes that have an integrated dual-port 10 GbE network interface controller (NIC), the NIC ports are routed to bays 1 and 2 through a specialized periscope connector, and an adapter in slot A1 is not required. However, when needed, the periscope connector can be replaced with an adapter. In that case, the integrated NIC is disabled.

Table 5 shows the connections from the adapters that are installed in the compute nodes to the switch bays in the chassis.

Table 5. Adapter to I/O bay correspondence

I/O adapter slot in the server | Port on the adapter | Corresponding I/O bay in the chassis
Slot 1 | Port 1 | Bay 1
Slot 1 | Port 2 | Bay 2
Slot 1 | Port 3* | Bay 1
Slot 1 | Port 4* | Bay 2
Slot 1 | Port 5** | Bay 1
Slot 1 | Port 6** | Bay 2
Slot 1 | Port 7# | (see note)
Slot 1 | Port 8# | (see note)
Slot 2 | Port 1 | Bay 3
Slot 2 | Port 2 | Bay 4
Slot 2 | Port 3* | Bay 3
Slot 2 | Port 4* | Bay 4
Slot 2 | Port 5** | Bay 3
Slot 2 | Port 6** | Bay 4
Slot 2 | Port 7# | (see note)
Slot 2 | Port 8# | (see note)
Slot 3 (full-wide compute nodes only) | Port 1 | Bay 1
Slot 3 | Port 2 | Bay 2
Slot 3 | Port 3* | Bay 1
Slot 3 | Port 4* | Bay 2
Slot 3 | Port 5** | Bay 1
Slot 3 | Port 6** | Bay 2
Slot 3 | Port 7# | (see note)
Slot 3 | Port 8# | (see note)
Slot 4 (full-wide compute nodes only) | Port 1 | Bay 3
Slot 4 | Port 2 | Bay 4
Slot 4 | Port 3* | Bay 3
Slot 4 | Port 4* | Bay 4
Slot 4 | Port 5** | Bay 3
Slot 4 | Port 6** | Bay 4
Slot 4 | Port 7# | (see note)
Slot 4 | Port 8# | (see note)
* Ports 3 and 4 require Upgrade 1 of the SI4093.
** Ports 5 and 6 require Upgrade 2 of the SI4093.
# Ports 7 and 8 are routed to I/O bays 1 and 2 (Slot 1 and Slot 3) or 3 and 4 (Slot 2 and Slot 4), but these ports cannot be used with the SI4093.
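
The routing in Table 5, together with the upgrade requirements described above, can be summarized programmatically. The following Python sketch is an illustrative summary only (the function names are invented, and it simply re-expresses the table): slots 1 and 3 are served by bays 1 and 2, slots 2 and 4 by bays 3 and 4, with odd-numbered adapter ports going to the lower bay and even-numbered ports to the higher bay.

# Illustrative summary of Table 5 (not an IBM tool): adapter port to I/O bay routing
# and the SI4093 FoD upgrade that each port pair requires.

from typing import Optional

def bay_for(slot, port):
    """Return the chassis I/O bay that serves a given adapter slot and port, or None if unusable."""
    if port in (7, 8):
        return None                                   # routed, but not usable with the SI4093
    pair = (1, 2) if slot in (1, 3) else (3, 4)       # slots 1/3 go to bays 1-2; slots 2/4 to bays 3-4
    return pair[0] if port % 2 == 1 else pair[1]      # odd ports to the first bay, even ports to the second

def required_upgrade(port):
    if port in (1, 2):
        return "base module"
    if port in (3, 4):
        return "Upgrade 1"
    if port in (5, 6):
        return "Upgrade 2"
    return "not supported on the SI4093"

for port in range(1, 9):
    print(f"Slot 1, port {port}: bay {bay_for(1, port)}, {required_upgrade(port)}")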

Table 6 lists the I/O adapters that are supported by the SI4093.

Table 6. Supported network adapters
Description | Part number | Feature code (x-config / e-config) | SI4093 support
40 Gb Ethernet
IBM Flex System EN6132 2-port 40Gb Ethernet Adapter | 90Y3482 | A3HK / A3HK | No
10 Gb Ethernet
Embedded 10Gb Virtual Fabric Adapter (2-port) | None | None / None | Yes*
IBM Flex System CN4022 2-port 10Gb Converged Adapter | 88Y5920 | A4K3 / A4K3 | Yes
IBM Flex System CN4054 10Gb Virtual Fabric Adapter (4-port) | 90Y3554 | A1R1 / None | Yes
IBM Flex System CN4054R 10Gb Virtual Fabric Adapter (4-port) | 00Y3306 | A4K2 / A4K2 | Yes
IBM Flex System CN4058 8-port 10Gb Converged Adapter | None | None / EC24 | Yes
IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | None | None / 1762 | Yes
IBM Flex System EN4132 2-port 10Gb Ethernet Adapter | 90Y3466 | A1QY / EC2D | Yes
IBM Flex System EN4132 2-port 10Gb RoCE Adapter | None | None / EC26 | Yes
1 Gb Ethernet
Embedded 1 Gb Ethernet controller (2-port)** | None | None / None | Yes
IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | 49Y7900 | A10Y / 1763 | Yes
* The Embedded 10Gb Virtual Fabric Adapter is built into x222 nodes and certain models of the x240 and x440 nodes. x222 nodes require Upgrade 1 to be applied to the SI4093 module to enable network connectivity.
** The Embedded 1 Gb Ethernet controller is built into x220 nodes.

The adapters are installed in slots in each compute node. Figure 4 shows the locations of the slots in the x240 Compute Node. The positions of the adapters in the other supported servers are similar.

Figure 4. Location of the I/O adapter slots in the IBM Flex System x240 Compute Node

Connectors and LEDs

Figure 5 shows the front panel of the IBM Flex System Fabric SI4093 System Interconnect Module.

Figure 5. Front panel of the IBM Flex System Fabric SI4093 System Interconnect Module

The front panel contains the following components:

  • LEDs that display the status of the interconnect module and the network:
    • OK: This LED indicates that the interconnect module passed the power-on self-test (POST) with no critical faults and is operational.
    • Identify: This blue LED can be used to identify the module physically by illuminating it through the management software.
    • Error (switch module error): This LED indicates that the module failed the POST or detected an operational fault.
  • One mini-USB RS-232 console port that provides an additional means to configure the interconnect module. This mini-USB-style connector enables connection of a special serial cable. (The cable is optional and it is not included with the interconnect module. For more information, see the "Part number information" section.)
  • Fourteen external SFP+ ports for 1 Gb or 10 Gb connections to external Ethernet devices.
  • Two external QSFP+ port connectors to attach QSFP+ modules or cables for a single 40 Gb uplink per port or for splitting of a single port into 4x 10 Gb connections to external Ethernet devices.
  • An Ethernet link OK LED and an Ethernet Tx/Rx LED for each external port on the interconnect module.

Network cabling requirements

The network cables that can be used with the SI4093 are shown in Table 7.

Table 7. SI4093 network cabling requirements

Transceiver | Standard | Cable | Connector
40 Gb Ethernet
IBM QSFP+ 40GBASE-SR4 Transceiver (49Y7884) | 40GBASE-SR4 | IBM MTP fiber optics cables up to 30 m (see Table 3) | MTP
IBM QSFP+ 40GBASE-LR4 Transceiver (00D6222) | 40GBASE-LR4 | 1310 nm single-mode fiber cable up to 10 km | LC
Direct attach cable | 40GBASE-CR4 | QSFP+ to QSFP+ DAC cables up to 7 m (see Table 3) | QSFP+
10 Gb Ethernet
IBM SFP+ SR Transceiver (46C3447) | 10GBASE-SR | 850 nm multimode fiber cable (50 µm or 62.5 µm) up to 300 m | LC
IBM SFP+ LR Transceiver (90Y9412) | 10GBASE-LR | 1310 nm single-mode fiber cable up to 10 km | LC
10GBase-SR SFP+ Transceiver (44W4408) | 10GBASE-SR | 850 nm multimode fiber cable (50 µm or 62.5 µm) up to 300 m | LC
Direct attach cable | 10GSFP+Cu | SFP+ DAC cables up to 7 m (see Table 3) | SFP+
1 Gb Ethernet
IBM SFP RJ-45 Transceiver (81Y1618) | 1000BASE-T | UTP Category 5, 5E, and 6 up to 100 meters | RJ-45
IBM SFP SX Transceiver (81Y1622) | 1000BASE-SX | 850 nm multimode fiber cable (50 µm or 62.5 µm) up to 550 m | LC
IBM SFP LX Transceiver (90Y9424) | 1000BASE-LX | 1310 nm single-mode fiber cable up to 10 km | LC
Management ports
External 1 GbE management port | 1000BASE-T | UTP Category 5, 5E, and 6 up to 100 meters | RJ-45
External RS-232 management port | RS-232 | DB-9-to-mini-USB or RJ-45-to-mini-USB console cable (comes with the optional Management Serial Access Cable, 90Y9338) | Mini-USB

Warranty

The SI4093 carries a 1-year, customer-replaceable unit (CRU) limited warranty. When installed in a chassis, these I/O modules assume your system’s base warranty and any IBM ServicePac® upgrade.

Physical specifications

Here are the approximate dimensions and weight of the SI4093:

  • Height: 30 mm (1.2 in.)
  • Width: 401 mm (15.8 in.)
  • Depth: 317 mm (12.5 in.)
  • Weight: 3.7 kg (8.1 lb)

Shipping dimensions and weight (approximate):
  • Height: 114 mm (4.5 in.)
  • Width: 508 mm (20.0 in.)
  • Depth: 432 mm (17.0 in.)
  • Weight: 4.1 kg (9.1 lb)

Agency approvals

The SI4093 conforms to the following regulations:

  • United States FCC 47 CFR Part 15, Subpart B, ANSI C63.4 (2003), Class A
  • IEC/EN 60950-1, Second Edition
  • Canada ICES-003, issue 4, Class A
  • Japan VCCI, Class A
  • Australia/New Zealand AS/NZS CISPR 22:2006, Class A
  • Taiwan BSMI CNS13438, Class A
  • CE Mark (EN55022 Class A, EN55024, EN61000-3-2, EN61000-3-3)
  • CISPR 22, Class A
  • China GB 9254-1998
  • Turkey Communiqué 2004/9; Communiqué 2004/22
  • Saudi Arabia EMC.CVG, 28 October 2002

Typical configurations

The most common SI4093 connectivity topology, which can be used with both IBM and non-IBM upstream network devices, is shown in Figure 6.

Figure 6. SI4093 connectivity topology - Link Aggregation

In this loop-free redundant topology, each SI4093 is physically connected to a separate Top-of-Rack (ToR) switch with static or LACP aggregated links.

When the SI4093 is used with the IBM RackSwitch switches, Virtual Link Aggregation Groups (vLAGs) can be used for load balancing and redundancy purposes. The virtual link aggregation topology is shown in Figure 7.

Figure 7. SI4093 connectivity topology - Virtual Link Aggregation

In this loop-free topology, aggregation is split between two physical switches, which appear as a single logical switch, and each SI4093 is connected to both ToR switches through static or LACP aggregated links.

Dual isolated SAN fabrics: If you plan to use FCoE and follow a dual isolated SAN fabric design approach (also known as SAN air gaps), consider the SI4093 connectivity topology shown in Figure 6 (Link Aggregation).

The following usage scenarios are considered:

  • SI4093 in the traditional 10 Gb Ethernet network
  • SI4093 in the converged FCoE network

SI4093 in the traditional 10 Gb Ethernet network

In the traditional 10 GbE network, the SI4093 can be used together with the physical NIC (pNIC) or switch-independent vNIC capabilities of the 10 Gb Virtual Fabric Adapters (VFAs) that are installed in each compute node. In vNIC mode, each physical port on the adapter is split into four virtual NICs (vNICs), each with an assigned share of the port's bandwidth. vNIC bandwidth allocation and metering are performed by the VFA, and a unidirectional virtual channel of the assigned bandwidth is established between the I/O module and the VFA for each vNIC. Up to 32 vNICs can be configured on a half-wide compute node.
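
The vNIC arithmetic in this scenario can be illustrated with a short Python sketch. This is a hypothetical example only (the 25/25/25/25 split is an arbitrary illustration, and the function names are invented); it shows how four vNICs per 10 Gb port lead to the 32-vNIC maximum on a half-wide node with two four-port adapters.

# Hypothetical illustration of switch-independent vNIC counts and bandwidth allocation.
# The 25/25/25/25 split below is only an example; actual shares are configured in the adapter.

VNICS_PER_PORT = 4
PORT_SPEED_GBPS = 10

def vnic_count(adapters, ports_per_adapter):
    """Maximum number of vNICs exposed by a compute node."""
    return adapters * ports_per_adapter * VNICS_PER_PORT

def allocate(shares_percent):
    """Split one 10 Gb physical port into per-vNIC bandwidths; shares must not exceed 100%."""
    if sum(shares_percent) > 100:
        raise ValueError("vNIC shares on a port cannot exceed the port speed")
    return [PORT_SPEED_GBPS * share / 100 for share in shares_percent]

# Half-wide compute node with two 4-port CN4054 adapters: 2 x 4 ports x 4 vNICs = 32 vNICs
print(vnic_count(adapters=2, ports_per_adapter=4))   # 32
print(allocate([25, 25, 25, 25]))                     # [2.5, 2.5, 2.5, 2.5] Gbps per vNIC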

The SI4093 interconnect modules are connected to ToR aggregation switches, such as the following ones:
  • IBM RackSwitch G8264 through the 10 GbE uplinks
  • IBM RackSwitch G8316/G8332 through the 40 GbE uplinks

Figure 8 illustrates this scenario.

Figure 8. SI4093 in the 10 GbE network

The solution components that are used in the scenario that is shown in Figure 8 are listed in Table 8.

Table 8. Components that are used in 10 GbE solution with the SI4093 (Figure 8)
Diagram reference | Description | Part number | Quantity
1 | IBM Flex System Virtual Fabric solution:
1 | IBM Flex System CN4054 10Gb Virtual Fabric Adapter | 90Y3554 | 1 per server
1 | IBM Flex System Fabric SI4093 System Interconnect Module | 95Y3313 | 2 per chassis
1 | IBM Flex System Fabric SI4093 System Interconnect Module (Upgrade 1) | 95Y3318 | 1 per SI4093
2 | IBM RackSwitch G8264, G8316, or G8332

Note: You also need SFP+/QSFP+ modules and optical cables or SFP+/QSFP+ DAC cables (not shown in Table 8; see Table 3 for details) for the external 10 Gb Ethernet connectivity.

SI4093 in the converged FCoE network

The SI4093 supports Data Center Bridging (DCB), and it can transport FCoE frames. These interconnect modules provide an inexpensive solution for transporting encapsulated FCoE packets to the Fibre Channel Forwarder (FCF), which functions as both an aggregation switch and an FCoE gateway. Vendor-specific examples of this scenario are shown in Figure 9, Figure 10, and Figure 11.

SI4093 in the FCoE network with the IBM RackSwitch G8264CS as an FCF
Figure 9. SI4093 in the FCoE network with the IBM RackSwitch G8264CS as an FCF

SI4093 in the FCoE network with the Brocade VDX 6730 as an FCF
Figure 10. SI4093 in the FCoE network with the Brocade VDX 6730 as an FCF

SI4093 in the FCoE network with the Cisco Nexus 5548/5596 as an FCF
Figure 11. SI4093 in the FCoE network with the Cisco Nexus 5548/5596 as an FCF

The solution components that are used in the scenarios that are shown in Figure 9, Figure 10, and Figure 11 are listed in Table 9, Table 10, and Table 11, respectively.

Table 9. SI4093 with the IBM G8264CS as an FCF (Figure 9)
Diagram reference | Description | Part number | Quantity
1 | IBM Flex System FCoE solution:
1 | IBM Flex System CN4054 10Gb Virtual Fabric Adapter | 90Y3554 | 1 per server
1 | IBM Flex System CN4054 Virtual Fabric Adapter Upgrade | 90Y3558 | 1 per VFA
1 | IBM Flex System Fabric SI4093 System Interconnect Module | 95Y3313 | 2 per chassis
1 | IBM Flex System Fabric SI4093 System Interconnect Module (Upgrade 1) | 95Y3318 | 1 per SI4093
2 | IBM RackSwitch G8264CS
  | IBM B-type, Brocade, or Cisco MDS SAN fabric
  | IBM System Storage® FC disk controllers: IBM System Storage DS3000 / DS5000, IBM System Storage DS8000®, IBM Storwize® V7000 / SAN Volume Controller, IBM XIV®

Table 10. SI4093 with the Brocade VDX 6730 as an FCF (Figure 10)
Diagram reference | Description | Part number | Quantity
1 | IBM Flex System FCoE solution:
1 | IBM Flex System CN4054 10Gb Virtual Fabric Adapter | 90Y3554 | 1 per server
1 | IBM Flex System CN4054 Virtual Fabric Adapter Upgrade | 90Y3558 | 1 per VFA
1 | IBM Flex System Fabric SI4093 System Interconnect Module | 95Y3313 | 2 per chassis
1 | IBM Flex System Fabric SI4093 System Interconnect Module (Upgrade 1) | 95Y3318 | 1 per SI4093
2 | Brocade VDX 6730 Converged Switch for IBM
  | IBM B-type SAN fabric
  | IBM System Storage FC disk controllers: IBM System Storage DS3000 / DS5000, IBM System Storage DS8000®, IBM Storwize V7000 / SAN Volume Controller, IBM XIV

Table 11. SI4093 with the Cisco Nexus 5548/5596 as an FCF (Figure 11)
Diagram reference | Description | Part number | Quantity
1 | IBM Flex System FCoE solution:
1 | IBM Flex System CN4054 10Gb Virtual Fabric Adapter | 90Y3554 | 1 per server
1 | IBM Flex System CN4054 Virtual Fabric Adapter Upgrade | 90Y3558 | 1 per VFA
1 | IBM Flex System Fabric SI4093 System Interconnect Module | 95Y3313 | 2 per chassis
1 | IBM Flex System Fabric SI4093 System Interconnect Module (Upgrade 1) | 95Y3318 | 1 per SI4093
2 | Cisco Nexus 5548/5596 Switch
  | Cisco MDS SAN fabric
  | IBM System Storage FC disk controllers: IBM Storwize V7000 / SAN Volume Controller

Note: You also need SFP+ modules and optical cables or SFP+ DAC cables (not shown in Table 9, Table 10, and Table 11; see Table 3 for details) for the external 10 Gb Ethernet connectivity.

IBM provides extensive FCoE testing to deliver network interoperability. For a full listing of IBM supported FCoE and iSCSI configurations, see the System Storage Interoperation Center (SSIC) website at:
http://ibm.com/systems/support/storage/ssic

Related publications and links

For more information, see the following IBM Flex System Fabric SI4093 System Interconnect Module product publications, which are available from the IBM Flex System Information Center at
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp:

  • Installation Guide
  • Application Guide
  • Command Reference


Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
06 August 2013

Last Update
10 March 2014




Author(s)

IBM Form Number
TIPS1045