RackSwitch G8264CS

Product Guide


Abstract

Many clients successfully use Ethernet and Fibre Channel connectivity from their servers to their LAN and SAN. These clients are seeking ways to reduce the cost and complexity of these environments by using the capabilities of Ethernet and Fibre Channel convergence.

The RackSwitch™ G8264CS top-of-rack switch offers the benefits of a converged infrastructure. As part of its forward-thinking design, this switch has flexibility for future growth and expansion. This switch is ideal for clients who are looking to connect to existing SANs and clients who want native Fibre Channel connectivity, in addition to support for such protocols as Ethernet, Fibre Channel over Ethernet (FCoE), and iSCSI.

Changes in the November 12 update:
--Added the 8Gb SFP+ FW Optical Transceiver to Table 2.
--Updated Figures 5-11.
--Removed references to IBM and IBM System Networking


The RackSwitch G8264CS includes the following highlights:
  • Lossless Ethernet, Fibre Channel, and FCoE in one switch
  • 36 SFP+ ports supporting 1-Gb or 10-Gb Ethernet
  • Flexibility with 12 Omni Ports that support 10-Gb Ethernet or 4/8 Gb Fibre Channel connections
  • Ideal for clients looking to aggregate FC or FCoE traffic with the ability to connect to existing SANs
  • Future-proofed with four 40-Gb QSFP+ ports
  • Low cost, low complexity, and simpler deployment and management

Figure 1 shows the RackSwitch G8264CS top-of-rack switch.


Figure 1. RackSwitch G8264CS


Did you know?

The RackSwitch G8264CS simplifies deployment with its innovative and flexible Omni Port technology. The 12 Omni Ports on the G8264CS give clients the flexibility to choose 10 Gb Ethernet, 4/8 Gb Fibre Channel, or both for upstream connections. In FC mode, Omni Ports provide convenient access to FC storage. The Omni Port technology on the G8264CS helps consolidate enterprise storage, networking, data, and management onto a single fabric that is simple to manage, efficient, and cost-effective. Also, the G8264CS can be used to create 252-node PODs or clusters with Flex System Interconnect Fabric.
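
For illustration, moving an Omni Port between Ethernet and FC is a configuration change rather than a hardware change. The ISCLI fragment below is a hedged sketch only: the port numbering (Omni Ports assumed to be ports 53 - 64), the pair-wise configuration behavior, and the exact command syntax are assumptions that should be verified against the G8264CS Application Guide:

```
! Sketch only (assumed syntax; see the Application Guide for the
! authoritative commands). Omni Ports are assumed to be ports 53-64
! and to be configured in odd/even pairs.
enable
configure terminal
  system port 53,54 type fc     ! run this Omni Port pair as 4/8 Gb FC
  exit
```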


Part number information

Table 1 shows the part numbers for ordering the switch.

Table 1. Part numbers and feature codes for ordering

Description                           System x part number   IBM Power Systems™ MTM
RackSwitch G8264CS (Rear-to-Front)    7309DRX                1455-64F
RackSwitch G8264CS (Front-to-Rear)    7309DFX                N/A


Two models are available, either rear-to-front air flow or front-to-rear air flow:
  • Rear-to-front model: Ports are in the rear of the rack and are suitable for use with System x, ThinkServer, Power Systems, Flex System™ or IBM PureSystems™, and BladeCenter® designs.
  • Front-to-rear model: Ports are in the front of the rack and are suitable for use with iDataPlex® systems and NeXtScale systems.

The base module part numbers include the following items:
  • One RackSwitch G8264CS
  • Generic Rail Mount Kit (2-post)
  • Serial cable

The 2-post Rack Installation Kit allows the switch to be mounted vertically or horizontally. An optional 4-post rail kit is also available (see Table 2).

Transceivers are not included in the purchase of the switch, and all transceivers require the appropriate cables (see Table 2). Power cords are also not included and must be ordered separately (see Table 3).


Features and benefits

The traditional approach of segmenting storage and data traffic has certain advantages, such as traffic isolation and independent administration. Nevertheless, it also poses several disadvantages, including higher infrastructure costs, complexity of management, and under-utilization of resources. Clients must invest in separate infrastructures for LAN, SAN, and interprocess communications (IPC) fabrics, including host adapters, cables, switching, routers, and other device-specific equipment.

The RackSwitch G8264CS offers the following features and benefits:
  • Lowers the total cost of ownership (TCO) with consolidation

    By consolidating LAN and SAN networks and converging to a single fabric, clients can reduce the equipment that is needed in their data centers. This benefit significantly reduces the costs that are associated with energy and cooling, management and maintenance, and capital costs.

  • Improves performance and increases availability

    The G8264CS is an enterprise-class and full-featured data center switch that offers high-bandwidth performance with thirty-six 1/10 Gb SFP+ connections, 12 Omni Ports that can be used for 10-Gb SFP+ connections, 4/8 Gb Fibre Channel connections, or both, plus four 40 Gb QSFP+ connections. The G8264CS switch delivers full line rate performance on Ethernet ports, making it an ideal choice for managing dynamic workloads across the network. This switch also provides a rich Layer 2 and Layer 3 feature set that is ideal for many of today’s data centers. Combined with redundant hot-swappable power and fans, along with numerous high availability features, this switch comes fully equipped to handle the demands of business-sensitive traffic.

  • High performance

    The 10-Gb/40-Gb switch provides the best combination of low latency, non-blocking line-rate switching, and ease of management. It has a throughput of up to 1.28 Tbps.

  • Lower power and better cooling

    The G8264CS uses as little as 330 W of power, which is a fraction of the power consumption of most competitive offerings. Unlike side-cooled switches, which can cause heat recirculation and reliability concerns, the front-to-rear or rear-to-front cooling design of the G8264CS switch reduces the costs of data center air conditioning by having airflow match the servers in the rack. In addition, variable speed fans help to automatically reduce power consumption.

  • Support for Virtual Fabric

    The G8264CS can help customers address I/O requirements for multiple NICs while reducing cost and complexity. By using Virtual Fabric, you can carve a physical dual-port NIC into multiple vNICs (2 - 8 vNICs per port) and create a virtual pipe between the adapter and the switch for improved performance, availability, and security. FCoE is also supported: two vNICs can be configured as CNAs, which allows for additional cost savings through convergence.

  • VM-aware networking

    VMready software on the switch simplifies configuration and improves security in virtualized environments. VMready automatically detects virtual machine movement between physical servers and instantly reconfigures the network policies of each VM across VLANs to keep the network up and running without interrupting traffic or impacting performance. VMready works with all leading VM providers, such as VMware, Citrix, Xen, IBM PowerVM®, and Microsoft.

  • Layer 3 functionality

    The G8264CS includes Layer 3 functionality, which provides security and performance benefits because inter-VLAN traffic stays within the switch. The switch also provides a full range of Layer 3 protocols, from static routes to dynamic routing protocols, such as Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP), for enterprise customers.

  • Seamless interoperability

    RackSwitch switches perform seamlessly with other vendors' upstream switches.

  • Fault tolerance

    RackSwitch switches learn alternative routes automatically and perform faster convergence in the unlikely case of a link, switch, or power failure. The switch uses proven technologies, such as L2 trunk failover, advanced VLAN-based failover, VRRP, and HotLink.

  • Multicast support

    These switches support IGMP Snooping v1, v2, and v3 with 2 K IGMP groups. They also support Protocol Independent Multicast (PIM), such as PIM Sparse Mode or PIM Dense Mode.

  • Transparent networking capability

    With a simple configuration change to easy connect mode, the RackSwitch G8264CS becomes a transparent network device that is invisible to the core. This mode eliminates network administration concerns about Spanning Tree Protocol configuration and interoperability and about VLAN assignments, and it avoids possible loops. By emulating a host NIC to the data center core, the switch accelerates the provisioning of VMs by eliminating the need to configure typical access switch parameters.
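
As a worked illustration of the Virtual Fabric feature described above, the following Python sketch models carving one 10 Gb physical port into vNICs. The 2 - 8 vNIC limit comes from this guide; the validation logic, function name, and allocation model are illustrative assumptions, not switch behavior:

```python
# Sketch of vNIC bandwidth carving on one 10 Gb port (assumed model:
# per-vNIC allocations that must not exceed the 10 Gb physical pipe).
PORT_GBPS = 10.0

def carve(vnic_gbps):
    """Validate a proposed vNIC split for a single physical port."""
    if not 2 <= len(vnic_gbps) <= 8:
        raise ValueError("Virtual Fabric supports 2 - 8 vNICs per port")
    if sum(vnic_gbps) > PORT_GBPS:
        raise ValueError("allocations exceed the 10 Gb physical pipe")
    return {f"vNIC{i}": bw for i, bw in enumerate(vnic_gbps, start=1)}

# Example: two vNICs acting as CNAs for FCoE plus two data vNICs.
print(carve([4.0, 4.0, 1.0, 1.0]))
```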



Connections and LEDs

Figure 2 shows the front of the RackSwitch G8264CS.

Figure 2. RackSwitch G8264CS

The switch has the following interfaces:

  • 36 SFP+ ports (1-Gb or 10-Gb Ethernet)
  • 12 Omni Ports (10-Gb Ethernet or 4/8 Gb Fibre Channel)
  • Four QSFP+ ports (40 Gb Ethernet)
  • One 10/100/1000 Ethernet RJ45 port for out-of-band management
  • One USB port for mass storage device connection
  • One mini-USB Console port for serial access
  • Server-like port orientations, enabling short and simple cabling
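
The interface counts above are also what yield the 1.28 Tbps aggregate throughput quoted under "High performance". A quick back-of-the-envelope check (a sketch only; it assumes the 12 Omni Ports run in 10 Gb Ethernet mode and that the aggregate figure counts both directions of full-duplex links):

```python
# Sanity-check the quoted 1.28 Tbps aggregate throughput from port counts.
ports = [
    (36, 10),  # 36 SFP+ ports at 10 Gb Ethernet
    (12, 10),  # 12 Omni Ports, assumed in 10 Gb Ethernet mode
    (4, 40),   # 4 QSFP+ ports at 40 Gb Ethernet
]
one_way_gbps = sum(count * speed for count, speed in ports)
aggregate_tbps = one_way_gbps * 2 / 1000  # full duplex, Gbps -> Tbps
print(one_way_gbps, aggregate_tbps)  # 640 1.28
```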

The RackSwitch G8264CS has the following status LEDs:
  • System LEDs to indicate link status, fan, power, and stacking.
  • Two LEDs per Omni Port for link status, activity, and mode (ETH/FC mode).
  • One LED per SFP+ port that corresponds to the Link and Activity LED described previously.
  • Four LEDs per QSFP+ port for link status and activity; separate LED activity for each 10G subinterface.
  • Blue Service Required LED, which is labeled as “!” on the front panel.
  • Green Stack Master LED, which is labeled as “M” on the front panel.
  • Fan module LED: Solid green indicates functional, and flashing green indicates a problem.
  • Power Supply module LED: Solid green indicates functional, and flashing green indicates a problem.


Software features and specifications

The software features, as of Version 6.8 plus Fibre Channel, are as follows:

Security
  • RADIUS
  • TACACS+
  • SCP
  • Wire Speed Filtering: Allow and Deny
  • SSH v1, v2
  • HTTPS Secure BBI
  • Secure interface login and password
  • MAC address move notification
  • Shift B Boot menu (Password Recovery/ Factory Default)

VLANs
  • Port-based VLANs
  • 4096 VLAN IDs supported
  • 2048 Active VLANs (802.1Q)
  • 802.1x with Guest VLAN
  • Private VLAN Edge

Lossless Ethernet
  • 802.1 Data Center Bridging
  • Priority Based Flow Control (PFC)
  • Enhanced Transmission Selection (ETS)
  • Data Center Bridge Exchange protocol (DCBX)
  • FIP Snooping
  • Converged Enhanced Ethernet

Fibre Channel/FCoE
  • NPV Gateway
    • FC Port Speeds: 4/8 Gb.
    • Bridging to Fibre Channel SANs.
    • Login load distribution.
    • End-to-end FCoE (initiator to target). FCoE initiator /target can be attached to any port that is configured as Ethernet.
    • Direct attachment of FCoE targets.
    • Manageable by using ISCLI or BBI.
  • Full Fabric FC/FCoE
    • FC Port Speeds: 4/8 Gb.
    • FC-BB-5 Compliant Full Fabric FC/FCoE.
    • Fabric Services: Name Server, Registered State Change Notification (RSCN), Login services, Zoning
    • WWN, FCID, or Alias based Zoning
    • Login load distribution
    • FC classes of service: Class 2 and Class 3.
    • Manageable through ISCLI / BBI.
  • Supported Protocols
    • Fibre Channel FCoE
    • T11 FCoE Initialization Protocol (FIP) (FC-BB-5)
    • Fibre Channel forwarding (FCF)
    • Fibre Channel port types: F, NP, VF, and E (E with OS release 7.8)
    • 16 Buffer credits supported
    • Fabric Device Management Interface (FDMI)
    • NPIV support
    • NPV gateway
    • Fabric Shortest Path First (FSPF)
    • Port security
    • Fibre Channel ping, debugging
  • Fibre Channel Standards
    • FC-PH, Revision 4.3 (ANSI/INCITS 230-1994)
    • FC-PH, Amendment 1 (ANSI/INCITS 230-1994/AM1 1996)
    • FC-PH, Amendment 2 (ANSI/INCITS 230-1994/AM2-1999)
    • FC-PH-2, Revision 7.4 (ANSI/INCITS 297-1997)
    • FC-PH-3, Revision 9.4 (ANSI/INCITS 303-1998)
    • FC-PI, Revision 13 (ANSI/INCITS 352-2002)
    • FC-PI-2, Revision 10 (ANSI/INCITS 404-2006)
    • FC-PI-4, Revision 7.0
    • FC-FS, Revision 1.9 (ANSI/INCITS 373-2003)
    • FC-FS-2, Revision 0.91
    • FC-FS-3, Revision 1.11
    • FC-LS, Revision 1.2
    • FC-SW-2, Revision 5.3 (ANSI/INCITS 355-2001)
    • FC-SW-3, Revision 6.6 (ANSI/INCITS 384-2004)
    • FC-SW-5, Revision 8.5 (ANSI/INCITS 461-2010)
    • FC-GS-3, Revision 7.01 (ANSI/INCITS 348-2001)
    • FC-GS-4, Revision 7.91 (ANSI/INCITS 387-2004)
    • FC-GS-6, Revision 9.4 (ANSI/INCITS 463-2010)
    • FC-BB-5, Revision 2.0 for FCoE
    • FCP, Revision 12 (ANSI/INCITS 269-1996)
    • FCP-2, Revision 8 (ANSI/INCITS 350-2003)
    • FCP-3, Revision 4 (ANSI/INCITS 416-2006)
    • FC-MI, Revision 1.92 (INCITS TR-30-2002, except for FL-ports and Class 2)
    • FC-MI-2, Revision 2.6 (INCITS TR-39-2005)
    • FC-SP, Revision 1.6
    • FC-DA, Revision 3.1 (INCITS TR-36-2004)

Trunking
    • LACP
    • Static Trunks (Etherchannel)
    • vLAG
    • Configurable Trunk Hash algorithm

Spanning Tree
    • Multiple Spanning Tree (802.1s)
    • Rapid Spanning Tree (802.1w)
    • PVRST+
    • Fast Uplink Convergence
    • BPDU guard

Quality of service
    • QoS 802.1p (priority queues)
    • DSCP remarking
    • Metering

Routing protocols
    • RIP v1/v2
    • OSPF
    • BGP

High availability
    • Layer 2 failover
    • HotLinks
    • Virtual Router Redundancy support (VRRP)

Multicast
    • IGMP Snooping v1, v2, and v3 with 2 K IGMP groups
    • Protocol Independent Multicast (PIM Sparse Mode/Dense Mode)

Monitoring
    • Port mirroring
    • ACL-based mirroring
    • sFlow Version 5

Virtualization
    • VMready with VI API support
    • vNIC MIB support for SNMP

User interfaces
    • System Networking Switch Center (SNSC)
    • ISCLI (Cisco-like syntax)
    • Scriptable CLI
    • Browser-based interface (BBI) or Telnet

Standard protocols
    • IPv4, IPv6
    • SNMP v1, v2c, and v3
    • RMON
    • Secondary NTP Support
    • DHCP Client
    • DHCP Relay
    • LLDP
    • 128 K MAC Table
    • 9 K Jumbo Frames
    • 802.3X Flow Control

Upgrades
    • Upgrade firmware by using serial or TFTP
    • Dual software images


Supported transceivers and cables

Table 2 lists the supported transceivers and cables.

Table 2. Transceivers and cables

Description                                                System x      IBM Power
                                                           part number   Systems MTM

1 Gb options (SFP+ ports only)
SFP RJ45 Transceiver (Withdrawn)                           81Y1618
SFP RJ45 Transceiver                                       00FE333       EB29
SFP SX Transceiver                                         81Y1622       EB2A
SFP LX Transceiver                                         90Y9424       ECB8
0.6 m Blue Cat5e Cable                                     40K5679       ECB0
1.5 m Blue Cat5e Cable                                     40K8785       ECB2
3 m Blue Cat5e Cable                                       40K5581       1111
10 m Blue Cat5e Cable                                      40K8927       1112
25 m Blue Cat5e Cable                                      40K8930       1113

10 Gb options (SFP+ or Omni Ports)
SFP+ SR Transceiver                                        46C3447       EB28
SFP+ LR Transceiver                                        00D6180
SFP+ ER Transceiver                                        90Y9415       ECBA
1 m LC-LC Fiber Cable (networking) - Optical               88Y6851       ECBC
5 m LC-LC Fiber Cable (networking) - Optical               88Y6854       ECBN
25 m LC-LC Fiber Cable (networking) - Optical              88Y6857       ECBE
0.5 m Passive DAC SFP+ Cable **                            00D6288       ECBG
1 m Passive DAC SFP+ Cable **                              90Y9427       ECB4
1.5 m Passive DAC SFP+ Cable **                            00AY764
2 m Passive DAC SFP+ Cable **                              00AY765
3 m Passive DAC SFP+ Cable **                              90Y9430       ECB5
5 m Passive DAC SFP+ Cable **                              90Y9433       ECB6
7 m Passive DAC SFP+ Cable * **                            00D6151       ECBH
1 m Active DAC SFP+ Cable                                  95Y0323
3 m Active DAC SFP+ Cable                                  95Y0326
5 m Active DAC SFP+ Cable                                  95Y0329
1 m 10GbE Cable SFP+ Active Twinax                                       EN01
3 m 10GbE Cable SFP+ Active Twinax                                       EN02
5 m 10GbE Cable SFP+ Active Twinax                                       EN03

40 Gb options (QSFP+ ports only)
QSFP+ SR4 Transceiver                                      49Y7884       EB27
QSFP+ eSR4 Transceiver                                     00FE325
10 m QSFP+ MTP Optical Cable (for SR4 Transceiver)         90Y3519       EB2J
30 m QSFP+ MTP Optical Cable (for SR4 Transceiver)         90Y3521       EB2K
1 m MTP-4xLC OM3 MMF Breakout Cable †                      00FM412
3 m MTP-4xLC OM3 MMF Breakout Cable †                      00FM413
5 m MTP-4xLC OM3 MMF Breakout Cable †                      00FM414
QSFP+ LR4 Transceiver ***                                  00D6222
1 m QSFP+ DAC Break Out Cable **                           49Y7886       EB24
3 m QSFP+ DAC Break Out Cable **                           49Y7887       EB25
5 m QSFP+ DAC Break Out Cable **                           49Y7888       EB26
1 m QSFP+-to-QSFP+ Cable                                   49Y7890       EB2B
3 m QSFP+-to-QSFP+ Cable                                   49Y7891       EB2H
5 m QSFP+-to-QSFP+ Cable                                   00D5810       ECBN
7 m QSFP+-to-QSFP+ Cable                                   00D5813       ECBP

Fibre Channel options (Omni Ports only)
8Gb SFP+ SW Optical Transceiver ****                       44X1964       3286
8Gb SFP+ FW Optical Transceiver ****                       00FM472

Spare options (the base switch already includes all that you need)
Hot-Swappable, Rear-to-Front Fan Assembly Spare            88Y6026       Sold as FRU
Hot-Swappable, Front-to-Rear Fan Assembly Spare            49Y7939       Sold as FRU
Hot-Swappable, Front-to-Rear 550W CFF Power Supply Spare   00D5961       Sold as FRU
Hot-Swappable, Rear-to-Front 750W CFF Power Supply Spare   00D5858       Sold as FRU
Console Cable Kit Spare                                    90Y9462       EUC4

Miscellaneous options
Adjustable 19" 4 Post Rail Kit                             00D6185       EU27
Recessed 19" 4 Post Rail Kit (NeXtScale)                   00CG089
Air Inlet Duct for 483 mm RackSwitch *****                 00D6060

* Use of the 7 m Passive DAC cable is restricted to 10 Gb SFP+ ports only.
** QSFP+-to-QSFP+ cables and QSFP+ DAC break out cables are not supported for connectivity to Power Systems 10 Gb NICs; they are used for switch-to-switch connectivity only.
*** The LR4 transceiver supports up to 10 km over LC-LC single-mode fibre (SMF).
**** Supports 4/8 Gbps when a multi-rate FC transceiver is plugged in (detected automatically).
***** Requires the Adjustable 19" 4 Post Rail Kit.
† The QSFP+ eSR4 and SR4 transceivers can use these cables, which break out into four OM3 multimode fiber (MMF) cables with LC duplex connectors, to support up to 100 m (SR4) or 300 m (eSR4). Valid configurations: 1) MTP-4xLC <= up to 100 m (OM3) with SR4 or 300 m with eSR4 => 4xLC-MTP; 2) MTP-4xLC (connected to 4x SFP+ SR) up to 300 m (OM3) using QSFP+ eSR4 only; this configuration is not supported with the QSFP+ SR4.

Figure 3 shows the transceiver and cable options and their supported distances.


Figure 3. Transceiver and cable options and supported distances

Figure 4 shows the rear view of the RackSwitch G8264CS.

Rear view of IBM RackSwitch G8264CS

Figure 4. Rear view of RackSwitch G8264CS


Power cord options

Power cords are not included with the switch. Table 3 lists the supported cords.

Table 3. Part numbers and feature codes for power cords

Description                                                          Part number
System x options
Power Cord Europe AC plug 10A/250V                                   39Y7917
Power Cord Europe (Denmark) AC plug 10A/250V                         39Y7918
Power Cord Europe (Switzerland) AC plug 10A/250V                     39Y7919
Power Cord Europe (Israel) AC plug 10A/250V                          39Y7920
Power Cord Europe (South Africa) AC plug 10A/250V                    39Y7922
Power Cord UK AC plug 13A/250V                                       39Y7923
Power Cord Australia AC plug 10A/250V                                39Y7924
Power Cord Korea AC plug 10A/250V                                    39Y7925
Power Cord India AC plug 10A/250V                                    39Y7927
Power Cord China AC plug 16A/250V                                    39Y7928
Power Cord Brazil AC plug 16A/250V                                   39Y7929
Power Cord Uruguay/Argentina AC plug 16A/250V                        39Y7930
Power Cord US 2.8m AC plug 10A/250V                                  46M2592
Power Cord Japan 2.8m AC plug 12A/125V                               46M2593
Intelligent Cluster™ or iDataPlex
Rack Power Cord, 10A 1.5m C13-C14                                    39Y7937
Rack Power Cord, 10A 4.3m C13-C14                                    39Y7932
Rack Power Cord, 10A 2.8m C13-C20                                    39Y7938
IBM Power Systems
LINECORD, TO WALL, 6', 100-127V/12A, IEC320/C13, PT#4                Feature 6470
LINECORD, TO WALL/OEM PDU, 9', 100-127V/10A, IEC320/C13, PT#70       Feature 6471
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#18       Feature 6472
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#19       Feature 6473
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#2        Feature 6488
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#23       Feature 6474
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#24       Feature 6476
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#32       Feature 6475
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#62       Feature 6493
LINECORD, TO WALL/OEM PDU, 9', 200-240V/10A, IEC320/C13, PT#69       Feature 6494
LINECORD, TO WALL/OEM PDU, 9', 200-240V/16A, IEC320/C13, PT#22       Feature 6477
LINECORD, TO WALL/OEM PDU, 9', 250/10A, IEC320/C13, PT#6, INSULATED  Feature 6680


Warranty information

The RackSwitch includes a standard 3-year hardware and software warranty. The right is reserved to provide new features as part of future software releases. Software Upgrade Entitlement is based on the switch warranty or post-warranty extension and service contracts.


Physical and environmental specifications

The switch has the following approximate dimensions and weight:
  • Width: 439 mm (17.3 inches)
  • Depth: 482 mm (19 inches)
  • Height: 1U or 45 mm (1.75 inches)
  • Weight: 10.35 kg (22.1 lb)

The switch requires the following power supplies:
  • Dual hot-swap power modules, 50 - 60 Hz, 100 - 240 VAC auto-switching per module
    • Front-to-rear models use the 550W CFF Power Supply modules.
    • Rear-to-front models use the 750W CFF Power Supply modules.
  • Typical power consumption of 330 watts

The switch has the following environmental specifications:
  • Temperature, ambient operating: 0 °C to +40 °C
  • Relative humidity, operating: 10 - 90%, non-condensing
  • Altitude, operating: up to 1,800 m (6,000 feet)
  • Typical heat dissipation: 1127 BTU/hour
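
The quoted heat dissipation is consistent with the typical power draw; a quick conversion check (1 W dissipates approximately 3.412 BTU per hour):

```python
# Cross-check the quoted ~1127 BTU/hour against the 330 W typical draw.
WATT_TO_BTU_PER_HOUR = 3.412  # approximate conversion factor
typical_watts = 330
btu_per_hour = typical_watts * WATT_TO_BTU_PER_HOUR
print(round(btu_per_hour))  # 1126, in line with the quoted 1127 BTU/hour
```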


Popular configurations

The following configurations are expected to be among the most popular with clients. Not all of these configurations are available at the time of writing. For more information, speak to your sales representative or Business Partner. For examples of official tested configurations, see the IBM System Storage® Interoperation Center (SSIC) at the following website:

http://ibm.com/systems/support/storage/ssic/interoperability.wss

RackSwitches undergo extensive FCoE testing to deliver network interoperability. For a full listing of supported FCoE and iSCSI configurations, visit the System Storage Interoperation Center (SSIC) at the following website:

http://ibm.com/systems/support/storage/ssic

Leveraging an existing LAN

Figure 5 shows how a client with existing SAN switches can use the G8264CS to simplify its rack environments by introducing Ethernet in the rack between the System x or IBM Power Systems servers and the G8264CS. The client can introduce Ethernet while breaking out the FC connections at the top of the rack to connect to the existing SAN switches, and then on to the client's storage devices.


Figure 5. Leveraging an existing LAN

Table 4 lists the supported components.

Table 4. Components

Adapter: Emulex VFA II/III (adapter + FoD key): 49Y7950 + 49Y4274; 49Y7940 + 49Y4274; 95Y3751 (included); 90Y6456 + 90Y5178; 95Y3762 + 95Y3760; 88Y6454 + 95Y3760
NIC configuration: pNIC or vNIC2
FCoE switch: G8264CS
SAN switch: Cisco SAN or Brocade SAN
Storage target (FC): IBM Storwize® V3700, Storwize V7000, SAN Volume Controller, DS3K/5K, DS8K, Tape, IBM XIV®
OS levels: Win2008, WS2012, ESX 4/5, RHEL 5/6, SLES 10/11

Adapter: QLogic 8200 PCIe (adapter + FoD key): 90Y4600 + 00Y5624
NIC configuration: pNIC or vNIC2
FCoE switch: G8264CS
SAN switch: Cisco SAN or Brocade SAN
Storage target (FC): Storwize V3700, Storwize V7000, SAN Volume Controller, DS3K/5K/8K, Tape, XIV
OS levels: Win2008, WS2012, ESX 4/5, RHEL 5/6, SLES 10/11

Leveraging Ethernet further in an existing data center

Figure 6 shows an example of how a client might further simplify its data center by using Ethernet more extensively before connecting to its existing SAN switches. This example shows how clients can use the RackSwitch G8124E to simplify their rack environments with Ethernet only. Ethernet can run down to the end of the row, or closer to the client's storage, where the client can install the G8264CS. The G8264CS then breaks out the FC connections to connect to the existing SAN switches, and finally on to the client's storage devices.


Figure 6. Leveraging Ethernet further in an existing data center

Table 5 summarizes the supported components.

Table 5. Components

Adapter: Emulex VFA II/III (adapter + FoD key): 49Y7950 + 49Y4274; 49Y7940 + 49Y4274; 95Y3751 (included); 90Y6456 + 90Y5178; 95Y3762 + 95Y3760; 88Y7429 + 95Y3760
NIC configuration: pNIC or vNIC2
Transit switch: G8124E
FCoE switch: G8264CS
SAN switch: Cisco SAN or Brocade SAN
Storage target: Storwize V3700, Storwize V7000, SAN Volume Controller, DS3K/5K, DS8K, Tape, XIV
OS levels: Win2008, WS2012, ESX 4/5, RHEL 5/6, SLES 10/11

Adapter: QLogic 8200 PCIe (adapter + FoD key): 90Y4600 + 00Y5624
NIC configuration: pNIC or vNIC2
Transit switch: G8124E
FCoE switch: G8264CS
SAN switch: Cisco SAN or Brocade SAN
Storage target: Storwize V3700, Storwize V7000, SAN Volume Controller, DS3K/5K, DS8K, Tape, XIV
OS levels: Win2008, WS2012, ESX 4/5, RHEL 5/6, SLES 10/11

Leveraging FCoE end to end in an existing data center

Figure 7 shows an example of how a client might further simplify its data center by doing away with the FC SAN switching fabric and implementing an end-to-end FCoE configuration. This example shows how clients can use the RackSwitch G8264 to simplify their rack environments with Ethernet only. Ethernet can run down to the end of the row, or closer to the client's storage, where the client can install the G8264CS to connect directly upstream to an IBM Storwize V3700/V7000 by using simpler Ethernet connectivity.

Figure 7. Leveraging Ethernet further in an existing data center with end to end FCoE

Table 6 summarizes the supported components.

Table 6. Components

Adapter: Emulex VFA II/III (adapter + FoD key): 49Y7950 + 49Y4274; 49Y7940 + 49Y4274; 95Y3751 (included); 90Y6456 + 90Y5178; 95Y3762 + 95Y3760; 88Y6454 + 95Y3760
NIC configuration: pNIC or vNIC1
Transit switch: G8264
FCoE switch: G8264CS (FCF mode)
SAN switch: None
Storage target: Storwize V3700, Storwize V7000, SAN Volume Controller
OS levels: WS2012, ESX 5, RHEL 6, SLES 11

Leveraging a BladeCenter environment

Figure 8 shows an example of how a client can use a BladeCenter environment. The client reduces costs inside the chassis with a single adapter in the blade, with a 10-Gb Ethernet adapter only in the chassis. The client can use the G8264CS at the top of the rack or somewhere else in the data center, breaking out the FC connections and connecting to the existing SAN switches, and then to the storage devices.


Figure 8. Leveraging a BladeCenter environment

Table 7 summarizes the supported components.

Table 7. Components

Adapter: Emulex VFA 2 (adapter + FoD key)
NIC configuration: pNIC or vNIC2
FCoE switch: G8264CS
SAN switch: Cisco SAN or Brocade SAN
Storage target: Storwize V3700, Storwize V7000, SAN Volume Controller, DS3K/5K, DS8K, Tape, XIV
OS levels: Win2008, WS2012, ESX 4/5, RHEL 5/6, SLES 10/11

Leveraging a Flex System or PureSystems environment: NPIV to FC SAN

Figure 9 shows an example of how a client can use the G8264CS for convergence in a Flex System or PureSystems environment. This approach can help a client significantly reduce costs inside the chassis with a single adapter in the compute node using CNA functionality, with a 10Gb Ethernet module, such as the SI4093/EN4093/EN4093R, in the chassis (no FC adapter or switches necessary). The client can then use the G8264CS at the top of the rack or elsewhere in the data center, breaking out the FC connections and connecting to the existing Brocade or Cisco SAN switches, and then to the storage devices.


Figure 9. Leveraging G8264CS with a Flex or PureSystem environment

Table 8 summarizes the supported components.

Table 8. Components

Adapter: LOM & CN4054 4-port adapter (BE3)
NIC configuration: pNIC, vNIC1, vNIC2, or UFP
Transit switch: SI4093, EN4093, or EN4093R
FCoE switch: G8264CS (NPIV mode)
SAN switch: Brocade or Cisco SAN
Storage target: Storwize V3700, Storwize V7000, SVC, DS3K/5K, DS8K, Tape, XIV
OS levels: Win2008, WS2012, WS2012 Hyper-V + NPIV, ESX 4/5/5.1/5.5, VMware ESX 5.1/5.5 + NPIV, RHEL 5/6/7, SLES 10/11

Adapter: CN4058 8-port adapter (same transit switch, FCoE switch, SAN switch, and storage targets as above)
NIC configuration: pNIC
OS levels: RHEL 5/6/7, SLES 11/12, AIX 6/7, VIOS 221/222, IBM i 6/7 VIOS

Leveraging a Flex System or PureSystems environment: FCoE end-to-end

Figure 10 shows an example of how a client can use the G8264CS for convergence in a Flex System or PureSystem environment to connect directly into their storage using FCoE. This approach can help clients significantly reduce costs inside the chassis by removing the need for FC SAN switches between the G8264CS and the storage. In the chassis, you simply have a single adapter in the compute node using CNA functionality, with a 10Gb Ethernet module, such as the SI4093/EN4093/EN4093R, in the chassis (no FC adapter or switches necessary). The client can then use the G8264CS at the top of the rack or elsewhere in the data center and then connect directly into the storage device.

Note: FLOGI is used to obtain a routable FCID for use in the FC frame exchange between the G8264CS and the Storwize V7000. The switch provides the FCID during a FLOGI exchange.

Figure 10. Leveraging G8264CS for end to end FCoE with a Flex or PureSystem environment

Table 9 summarizes the supported components.

Table 9. Components

Adapter: LOM & CN4054 4-port adapter (BE3)
NIC configuration: pNIC or vNIC1
Transit switch: SI4093, EN4093, or EN4093R
FCoE switch: G8264CS (FCF mode)
SAN switch: None
Storage target: Storwize V3700, Storwize V7000, SAN Volume Controller
OS levels: WS2008, WS2012, WS2012 Hyper-V + NPIV, ESX 4/5/5.1/5.5, VMware ESX 5.1/5.5 + NPIV, RHEL 5/6, SLES 10/11

Adapter: CN4058 8-port adapter (same transit switch, FCoE switch, SAN switch, and storage targets as above)
NIC configuration: pNIC
OS levels: RHEL 5/6/7, SLES 11/12, AIX 6/7, VIOS 221/222

Build 252-node POD/cluster with Flex System Interconnect Fabric

With the growth of cloud, media applications, mobile connections, and big data, IT departments are faced with many new requirements. Flex System® Interconnect Fabric is designed to meet these needs by providing a simple Ethernet fabric cluster that accelerates deployment, simplifies management, enables dynamic scalability, and increases reliability, availability, and security in medium-scale to large-scale POD deployments. This solution offers a solid foundation of compute, network, storage, and software resources in a Flex System POD.

The key I/O components of this solution are a pair of RackSwitch G8264CS switches. One G8264CS acts as the center of intelligence and provides all direction and updates to the redundant G8264CS and to the 2 - 18 Flex System SI4093 System Interconnect Modules. By using Flex System x222 compute nodes, clients can easily set up a single chassis and then scale up to nine chassis to build a 252-node POD or cluster, with automated capabilities for adding chassis after the initial setup. Clients can also exploit the acquisition and operation cost savings of converging Ethernet and Fibre Channel traffic within the POD or cluster while still connecting simply into their existing upstream networks.
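
The 252-node figure follows from simple chassis arithmetic. This sketch assumes a fully populated Flex System chassis with 14 compute bays and the double-dense x222, which carries two servers per bay:

```python
# Where the 252-node POD size comes from (assumed chassis geometry).
servers_per_bay = 2       # Flex System x222 is a double-dense compute node
bays_per_chassis = 14     # standard Flex System chassis bays (assumed)
max_chassis = 9           # maximum chassis per Interconnect Fabric POD
pod_nodes = servers_per_bay * bays_per_chassis * max_chassis
print(pod_nodes)  # 252
```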

Figure 11 shows the Flex System Interconnect Fabric using nine chassis with SI4093 modules and a pair of RackSwitch G8264CS switches connecting to a client's existing LAN and SAN, which could be a Brocade switch or a Cisco MDS.


Figure 11. Flex System Interconnect Fabric using 9 chassis
The solution components that are used in the scenario that is shown in Figure 11 are listed in Table 10.

Table 10. Building a Flex System Interconnect Fabric POD using FCoE (refer to Figure 11)

Reference 1: RackSwitch G8264CS - 7309DRX - 2 per POD
  • For upstream connections to the LAN, use the SFP+ or QSFP+ ports with the appropriate cables and transceivers
  • For upstream connections to the Brocade or Cisco FC SAN, use the Omni Ports with the 8Gb FC transceivers
Reference 2: Flex System chassis - supports 1 - 9 chassis:
  • Compute nodes with the appropriate Virtual Fabric LOM, CN4054 Virtual Fabric Adapter, or CN4058 adapter - 90Y3554 - 1 per server
  • Flex System Virtual Fabric Adapter Upgrade for LOM or CN4054 - 90Y3558 - 1 per VFA
  • Flex System Fabric SI4093 System Interconnect Module - 95Y3313 - 2 per chassis
  • Flex System Fabric SI4093 System Interconnect Module (Upgrade 1; assumes a fully populated chassis) - 95Y3318 - 1 per SI4093
Reference 3: Upstream LAN - independent of vendor; connect to your existing network of choice
Reference 4: Brocade or Cisco MDS SAN fabric

The IBM System Storage® FC disk controllers can be selected from:
  • IBM System Storage DS3000 / DS5000
  • IBM System Storage DS8000®
  • IBM Storwize® V7000 / SAN Volume Controller
  • IBM XIV®

Related publications

For more information, see the following RackSwitch G8264CS product publications, which are available at:
http://ibm.com/support/entry/portal/Documentation
  • Application Guide
  • Industry-Standard CLI Reference
  • Browser-Based Interface (BBI) Quick Guide
  • Menu-based CLI Command Reference

For announcement letters, sale manuals, or both, see the Offering Information page at:
http://www.ibm.com/common/ssi/index.wss?request_locale=en

On this page, enter RackSwitch G8264CS, select the information type, and then click Search. On the next page, narrow your search results by geography and language.

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
05 February 2013

Last Update
12 November 2014


IBM Form Number
TIPS0970