
IBM Flex System CN4058 8-port 10Gb Converged Adapter

Web Doc


Published on 13 November 2012, updated 20 October 2014


IBM Form #: TIPS0909


Authors: David Watts


    Abstract

    The IBM Flex System™ CN4058 8-port 10Gb Converged Adapter is an 8-port 10Gb converged network adapter (CNA) for Power Systems compute nodes that supports 10 Gb Ethernet and FCoE. With hardware protocol offloads for TCP/IP and FCoE standard, the CN4058 8-port 10Gb Converged Adapter provides maximum bandwidth with minimum use of CPU resources. This is key in IBM Virtual I/O Server (VIOS) environments because it enables more VMs per server, providing greater cost savings and optimizing return on investment. With eight ports, it takes full advantage of the capabilities of all Ethernet switches in the IBM Flex System portfolio.

    Changes in the 17 and 20 October 2014 updates:

    * Added the x240 M5 compute node to Table 2

    * General administrative update

    Contents

    The IBM Flex System™ CN4058 8-port 10Gb Converged Adapter is an 8-port 10Gb converged network adapter (CNA) for Power Systems compute nodes that supports 10 Gb Ethernet and FCoE.

    With hardware protocol offloads for TCP/IP and FCoE standard, the CN4058 8-port 10Gb Converged Adapter provides maximum bandwidth with minimum use of CPU resources. This is key in IBM Virtual I/O Server (VIOS) environments because it enables more VMs per server, providing greater cost savings and optimizing return on investment. With eight ports, it takes full advantage of the capabilities of all Ethernet switches in the IBM Flex System portfolio.

    Figure 1 shows the adapter.

    Figure 1. IBM Flex System CN4058 8-port 10Gb Converged Adapter


    Did you know?

    IBM Flex System is a new category of computing that integrates multiple server architectures, networking, storage, and system management capability into a single system that is easy to deploy and manage. IBM Flex System has full built-in virtualization support of servers, storage, and networking to speed provisioning and increase resiliency. In addition, it supports open industry standards, such as operating systems, networking and storage fabrics, virtualization, and system management protocols, to easily fit within existing and future data center environments. IBM Flex System is scalable and extendable with multi-generation upgrades to protect and maximize IT investments.


    Part number information

    Table 1. Ordering part numbers and feature codes

    Description                                           Part number   Feature code   Feature code
                                                                        (x-config)     (e-config)
    IBM Flex System CN4058 8-port 10Gb Converged Adapter  None          None           EC24


    Features

    The IBM Flex System CN4058 8-port 10Gb Converged Adapter has these features:
    • Eight-port 10 Gb Ethernet adapter
    • Dual-ASIC controller using the Emulex XE201 (Lancer) design
    • PCI Express 2.0 x8 host interface (5 GT/s)
    • MSI-X support
    • IBM Fabric Manager support

    Ethernet features
    • IPv4/IPv6 TCP and UDP checksum offload; Large Send Offload (LSO); Large Receive Offload; Receive Side Scaling (RSS); TCP Segmentation Offload (TSO)
    • VLAN insertion and extraction
    • Jumbo frames up to 9000 bytes
    • Priority Flow Control (PFC) for Ethernet traffic
    • Network boot
    • Interrupt coalescing
    • Load balancing and failover support, including adapter fault tolerance (AFT), switch fault tolerance (SFT), adaptive load balancing (ALB), link aggregation, and IEEE 802.1AX

    FCoE features
    • Common driver for CNAs and HBAs
    • 3,500 N_Port ID Virtualization (NPIV) interfaces (total for adapter)
    • Support for FIP and FCoE Ether Types
    • Fabric Provided MAC Addressing (FPMA) support
    • 2048 concurrent port logins (RPIs) per port
    • 1024 active exchanges (XRIs) per port

    Note: The CN4058 does not support iSCSI hardware offload.

    Standards

    The adapter supports the following standards:
    • PCI Express base spec 2.0, PCI Bus Power Management Interface, rev. 1.2, Advanced Error Reporting (AER)
    • IEEE 802.3ap (Ethernet over Backplane)
    • IEEE 802.1q (VLAN)
    • IEEE 802.1p (QoS/CoS)
    • IEEE 802.1AX (Link Aggregation)
    • IEEE 802.3x (Flow Control)
    • Enhanced I/O Error Handling (EEH)
    • Enhanced Transmission Selection (ETS) (P802.1Qaz)
    • Priority-based Flow Control (PFC) (P802.1Qbb)
    • Data Center Bridging Capabilities eXchange Protocol, CIN-DCBX and CEE-DCBX (P802.1Qaz)

    Supported servers

    The following table lists the IBM Flex System compute nodes that support the adapters.

    Table 2. Supported servers
    IBM Flex System CN4058 8-port 10Gb Converged Adapter (feature code EC24)

    Server                            Supported
    x220 (7906)                       No
    x222 (7916)                       No
    x240 (8737, E5-2600)              No
    x240 (8737, E5-2600 v2)           No
    x240 M5 (9532)                    No
    x440 (7917)                       No
    x280 / x480 / x880 X6 (7903)      No
    p24L (1457)                       Yes
    p260 (7895)                       Yes
    p270 (7954)                       Yes
    p460 (7895)                       Yes

    See IBM ServerProven at the following web address for the latest information about the expansion cards that are supported by each blade server type:
    http://ibm.com/servers/eserver/serverproven/compat/us/

    I/O adapter cards are installed in the I/O adapter slots of supported servers, such as the p260, as highlighted in the following figure.

    Figure 2. Location of the I/O adapter slots in the IBM Flex System p260 Compute Node


    Supported I/O modules

    The adapter can be installed in any I/O adapter slot of a supported IBM Flex System compute node. One or two compatible 1 Gb or 10 Gb I/O modules must be installed in the corresponding I/O bays in the chassis. The following table lists the switches that are supported. When connected to a 1 Gb switch, the adapter operates at 1 Gb speeds.

    To maximize the number of usable adapter ports, switch upgrades must also be ordered, as indicated in the following table. Alternatively, for the CN4093, EN4093R, and SI4093 switches, you can use Flexible Port Mapping, a feature introduced in Networking OS 7.8 that minimizes the number of upgrades needed. See the Product Guides for the switches for more details:
    http://www.redbooks.ibm.com/portals/puresystems?Open&page=pg&cat=switches

    The table also specifies how many ports of the CN4058 adapter are supported once all indicated upgrades are applied. Switches should be installed in pairs to maximize the number of ports enabled and to provide redundant network connections.

    Table 3. I/O modules and upgrades for use with the CN4058 adapter
    • IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
      + CN4093 10Gb Converged Scalable Switch (Upgrade 1) #ESU1
      + CN4093 10Gb Converged Scalable Switch (Upgrade 2) #ESU2
      Feature code (e-config): ESW2. Port count (per pair of switches)*: 6.
      Internal switch ports enabled: INTAx, INTBx, INTCx.
    • IBM Flex System Fabric EN4093R 10Gb Scalable Switch
      + EN4093 10Gb Scalable Switch (Upgrade 1) #3596
      + EN4093 10Gb Scalable Switch (Upgrade 2) #3597
      Feature code (e-config): ESW7. Port count (per pair of switches)*: 6.
      Internal switch ports enabled: INTAx, INTBx, INTCx.
    • IBM Flex System Fabric EN4093 10Gb Scalable Switch
      + EN4093 10Gb Scalable Switch (Upgrade 1) #3596
      + EN4093 10Gb Scalable Switch (Upgrade 2) #3597
      Feature code (e-config): 3593**. Port count (per pair of switches)*: 6.
      Internal switch ports enabled: INTAx, INTBx, INTCx.
    • IBM Flex System EN4091 10Gb Ethernet Pass-thru
      Feature code (e-config): 3700. Port count (per pair of switches)*: 2.
      Internal switch ports enabled: INTAx.
    • IBM Flex System Fabric SI4093 System Interconnect Module
      + SI4093 System Interconnect Module (Upgrade 1) #ESW8
      + SI4093 System Interconnect Module (Upgrade 2) #ESW9
      Feature code (e-config): ESWA. Port count (per pair of switches)*: 6.
      Internal switch ports enabled: INTAx, INTBx, INTCx.
    • IBM Flex System EN2092 1Gb Ethernet Scalable Switch
      + EN2092 1Gb Ethernet Scalable Switch (Upgrade 1) #3594
      Feature code (e-config): 3598. Port count (per pair of switches)*: 4.
      Internal switch ports enabled: INTAx, INTBx.
    • IBM Flex System EN4023 10Gb Scalable Switch
      + IBM Flex System EN4023 10Gb Scalable Switch (Upgrade 1)
      + IBM Flex System EN4023 10Gb Scalable Switch (Upgrade 2)
      Feature codes (e-config): ESWD (switch), ESWE (Upgrade 1), ESWF (Upgrade 2).
      Port count (per pair of switches)*: 6.
      Internal switch ports enabled: INTAx, INTBx, INTCx.
    • Cisco Nexus B22 Fabric Extender for IBM Flex System
      Feature code (e-config): ESWB. Port count (per pair of switches)*: 2.
      Internal switch ports enabled: INTAx.
    * This column indicates the number of adapter ports that will be active if all upgrades are installed.
    ** Withdrawn from marketing

    Note: Adapter ports 7 and 8 are reserved for future use. The chassis supports all eight ports but there are currently no switches available that connect to these ports.
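    As a rough sketch of the licensing arithmetic above, the following Python helper models the EN4093R-style mapping from Table 3. It assumes the base license enables the INTAx set (adapter ports 1 and 2) and each upgrade enables one further set; the function is illustrative only, not an IBM tool.

```python
def active_adapter_ports(upgrade1: bool = False, upgrade2: bool = False) -> int:
    """Adapter ports active per pair of EN4093R-class switches (sketch of Table 3).

    Assumes the base license enables the INTAx ports (adapter ports 1 and 2),
    Upgrade 1 enables INTBx (ports 3 and 4), and Upgrade 2 enables INTCx
    (ports 5 and 6). Adapter ports 7 and 8 are reserved and never counted.
    """
    ports = 2          # INTAx set: adapter ports 1 and 2
    if upgrade1:
        ports += 2     # INTBx set: adapter ports 3 and 4
    if upgrade2:
        ports += 2     # INTCx set: adapter ports 5 and 6
    return ports
```

    For example, a fully upgraded switch pair yields six active adapter ports, matching the port count column in Table 3.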

    The following table shows the connections between adapters installed in the compute nodes and the switch bays in the chassis.

    Table 4. Adapter to I/O bay correspondence
    Each cell shows the corresponding I/O module bay in the chassis.

    I/O adapter slot          Odd-numbered ports*        Even-numbered ports*
    in the server             (1, 3, 5, 7**)             (2, 4, 6, 8**)
    Slot 1                    Bay 1                      Bay 2
    Slot 2                    Bay 3                      Bay 4
    Slot 3 (p460 only)        Bay 1                      Bay 2
    Slot 4 (p460 only)        Bay 3                      Bay 4
    * The use of adapter ports 3, 4, 5, and 6 requires upgrades to the switches, as described in Table 3. The EN4091 Pass-thru only supports ports 1 and 2 (and only when two Pass-thru modules are installed).
    ** Adapter ports 7 and 8 are reserved for future use. The chassis supports all eight ports but there are currently no switches available that connect to these ports.
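    The slot-to-bay pattern in Table 4 is regular enough to express in code. This hypothetical Python helper (not part of any IBM tooling) encodes the rule that odd-numbered adapter ports connect to the first bay of the slot's bay pair and even-numbered ports to the second:

```python
def io_module_bay(slot: int, port: int) -> int:
    """Return the chassis I/O module bay for a CN4058 adapter port (per Table 4)."""
    if slot not in (1, 2, 3, 4):
        raise ValueError("slot must be 1-4 (slots 3 and 4 exist on the p460 only)")
    if port not in range(1, 9):
        raise ValueError("port must be 1-8")
    # Slots 1 and 3 route to bays 1-2; slots 2 and 4 route to bays 3-4.
    first_bay = 1 if slot in (1, 3) else 3
    # Odd ports use the first bay of the pair, even ports the second.
    return first_bay if port % 2 == 1 else first_bay + 1
```

    Remember that ports 7 and 8 are reserved: the chassis wiring exists, but no current switch connects to them.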

    The following figure shows the internal layout of the CN4058 for consideration when ports are assigned for use on VIOS for TCP and FCP traffic with a CN4093, EN4093R, or SI4093 switch. Red lines indicate connections from ASIC 1 on the CN4058 adapter, and blue lines indicate connections from ASIC 2. The dotted blue lines are reserved for future use, when switches that support all eight ports of the adapter are offered.


    Figure 3. Internal layout of the CN4058 adapter connected to CN4093, EN4093R, or SI4093 switch

    Table 3 indicates which internal switch ports (INTAx, INTBx, and INTCx) are enabled when all switch upgrades are enabled.

    Dual VIOS note: Enabling both switch upgrade licenses enables all 42 internal ports: the “A”, “B”, and “C” sets. The first ASIC connects to one “A”, one “B”, and two “C” ports (the red lines). The second ASIC connects to one “A” and one “B” port (the solid blue lines). The other two ports from the second ASIC are unused (dotted blue lines). The implication is that if each ASIC is assigned to a different VIOS and both upgrades are installed, the first VIOS has four active ports and the second VIOS has two active ports.
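    The port arithmetic in the note can be tallied explicitly. The following sketch hard-codes the per-switch wiring described above (four active internal ports on the first ASIC, two on the second); the dictionary layout is illustrative only:

```python
# Active internal switch ports per ASIC when both upgrades are installed.
# The two unused (dotted-line) ports on the second ASIC are deliberately omitted.
ports_per_asic = {
    "ASIC 1": ["INTA", "INTB", "INTC", "INTC"],  # red lines: four active ports
    "ASIC 2": ["INTA", "INTB"],                  # solid blue lines: two active ports
}

active_counts = {asic: len(ports) for asic, ports in ports_per_asic.items()}
# A VIOS bound to ASIC 1 therefore sees 4 active ports; one bound to ASIC 2 sees 2.
```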

    The connections between the CN4058 8-port adapters installed in the compute nodes and the switch bays in the chassis are shown diagrammatically in the following figure. The figure shows both half-wide servers, such as the p260 or p270 with two adapters, and full-wide servers, such as the p460 with four adapters.

    Figure 4. Logical layout of the interconnects between I/O adapters and I/O modules


    FCoE support

    The following two tables list FCoE support for the CN4058 8-port 10Gb Converged Adapter, first using Fibre Channel targets and then using native FCoE targets (end-to-end FCoE).

    Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage® Interoperation Center (SSIC) web site:
    http://ibm.com/systems/support/storage/ssic/interoperability.wss

    Table 5. FCoE support using FC targets
    Flex System I/O module             FC Forwarder (FCF)         Supported SAN fabric
    CN4093 10Gb Converged Switch       CN4093                     IBM B-type; Cisco MDS
    EN4093 10Gb Switch or              Brocade VDX 6730           IBM B-type
    EN4093R 10Gb Switch
    EN4093 10Gb Switch or              Cisco Nexus 5548 or 5596   Cisco MDS
    EN4093R 10Gb Switch

    Operating systems (all combinations): AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.4
    Storage targets (all combinations): IBM DS8000, IBM SVC, IBM Storwize V7000, V7000 Storage Node, IBM XIV

    The following table lists FCoE support using native FCoE targets (that is, end-to-end FCoE).

    Table 6. FCoE support using FCoE targets (end-to-end FCoE)
    Flex System I/O module           Operating system               Storage targets
    CN4093 10Gb Converged Switch     AIX 6.1, AIX 7.1, VIOS 2.2,    IBM Storwize V7000 Storage Node (FCoE)
                                     SLES 11.2, RHEL 6.4


    Operating system support

    The IBM Flex System CN4058 8-port 10Gb Converged Adapter supports the following operating systems:
    • AIX Version 6.1
    • AIX Version 7.1
    • IBM i 6.1
    • IBM i 7.1
    • IBM Virtual I/O Server
    • Red Hat Enterprise Linux 5 for IBM POWER
    • Red Hat Enterprise Linux 6 for IBM POWER
    • SUSE LINUX Enterprise Server 11 for IBM POWER

    Support for operating systems is based on the combination of the expansion card and the blade server on which it is installed. See the IBM ServerProven website for the latest information about the specific versions and service packs supported. Select the blade server, and then select the expansion card to see the supported operating systems: http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/

    For the latest information about installing Linux on IBM Power Systems, see:
    http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/info/LinuxAlerts.html


    Warranty

    There is a 1-year, customer-replaceable unit (CRU) limited warranty. When installed in a server, these adapters assume your system’s base warranty and any IBM ServicePac® upgrade.


    Physical specifications

    The dimensions and weight of the adapter are as follows:
    • Width: 100 mm (3.9 in.)
    • Depth: 80 mm (3.1 in.)
    • Weight: 130 g (0.3 lb)

    Shipping dimensions and weight (approximate):
    • Height: 58 mm (2.3 in.)
    • Width: 229 mm (9.0 in.)
    • Depth: 208 mm (8.2 in.)
    • Weight: 0.4 kg (0.89 lb)


    Regulatory compliance

    The adapter conforms to the following regulatory standards:
    • United States FCC 47 CFR Part 15, Subpart B, ANSI C63.4 (2003), Class A
    • United States UL 60950-1, Second Edition
    • IEC/EN 60950-1, Second Edition
    • FCC - Verified to comply with Part 15 of the FCC Rules, Class A
    • Canada ICES-003, issue 4, Class A
    • UL/IEC 60950-1
    • CSA C22.2 No. 60950-1-03
    • Japan VCCI, Class A
    • Australia/New Zealand AS/NZS CISPR 22:2006, Class A
    • IEC 60950-1 (CB Certificate and CB Test Report)
    • Taiwan BSMI CNS13438, Class A
    • Korea KN22, Class A; KN24
    • Russia/GOST ME01, IEC-60950-1, GOST R 51318.22-99, GOST R 51318.24-99, GOST R 51317.3.2-2006, GOST R 51317.3.3-99
    • CE Mark (EN55022 Class A, EN60950-1, EN55024, EN61000-3-2, EN61000-3-3)
    • CISPR 22, Class A


    Popular configurations

    The adapters can be used in various configurations.

    Ethernet configuration

    The following figure shows CN4058 8-port 10Gb Converged Adapters installed in both slots of the p260, which in turn is installed in the chassis. The chassis also has four IBM Flex System Fabric EN4093R 10Gb Scalable Switches, each with both Upgrade 1 and Upgrade 2 installed, enabling 42 internal ports on each switch. The switch configuration enables six of the eight ports on each CN4058 adapter.

    Figure 5. Example configuration

    The following table lists the parts that are used in the configuration.

    Table 7. Components used when connecting the adapter to the 10 GbE switches
    Model / feature   Description                                                      Quantity
    7895-23X          IBM Flex System p260 Compute Node                                1 to 14
    EC24              IBM Flex System CN4058 8-port 10Gb Converged Adapter             2 per server
    7893-92X          IBM Flex System Enterprise Chassis                               1
    ESW7              IBM Flex System Fabric EN4093R 10Gb Scalable Switch              4
    3596              IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 1)   4
    3597              IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 2)   4

    FCoE configuration using a Brocade SAN

    The CN4058 adapter can be used with EN4093R 10Gb Scalable Switches acting as Data Center Bridging (DCB) transit switches that transport FCoE frames using FCoE Initialization Protocol (FIP) snooping. The encapsulated FCoE packets are sent to the Brocade VDX 6730 Fibre Channel Forwarder (FCF), which functions as both an aggregation switch and an FCoE gateway, as shown in the following figure.

    Figure 6. FCoE solution using the EN4093R as an FCoE transit switch with the Brocade VDX 6730 as an FCF

    The solution components used in the scenario depicted in the figure are listed in the following table.

    Table 8. FCoE solution using the EN4093R as an FCoE transit switch with the Brocade VDX 6730 as an FCF
    Diagram reference 1: IBM Flex System FCoE solution
      Description                                                      Feature code   Quantity
      IBM Flex System CN4058 8-port 10Gb Converged Adapter             EC24           1 per server
      IBM Flex System Fabric EN4093R 10Gb Scalable Switch              ESW7           2 per chassis
      IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 1)   3596           1 per EN4093R
      IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 2)   3597           1 per EN4093R
    Diagram reference 2: Brocade VDX 6730 Converged Switch for IBM
    IBM B-type or Brocade SAN fabric
    IBM System Storage FC disk controllers:
      IBM System Storage DS3000 / DS5000
      IBM System Storage DS8000
      IBM Storwize V7000 / SAN Volume Controller
      IBM XIV

    IBM performs extensive FCoE testing to ensure network interoperability. For a full listing of IBM supported FCoE and iSCSI configurations, see the System Storage Interoperation Center (SSIC) website at:
    http://ibm.com/systems/support/storage/ssic


    Related publications

    For more information refer to the following resources:

     


    Special Notices

    The material included in this document is in DRAFT form and is provided 'as is' without warranty of any kind. IBM is not responsible for the accuracy or completeness of the material, and may update the document at any time. The final, published document may not include any, or all, of the material included herein. Client assumes all risks associated with Client's use of this document.