IBM Flex System p270 Compute Node

IBM Redbooks Product Guide

Abstract

The IBM® Flex System™ p270 Compute Node is a server that is based on IBM POWER® architecture technologies. This compute node runs in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment, using advanced processing technology. The compute node supports IBM AIX®, IBM i, or Linux operating environments and can run a wide variety of workloads. The p270 is a standard compute form factor with two IBM POWER7+™ dual-chip module (DCM) processor sockets.

In the December 24 & 26 update:
* Corrected the link for more information about supported operating systems
* Clarified the RAID 0 and RAID 10 definitions



Introduction

The IBM® Flex System p270 Compute Node is a server that is based on IBM POWER® architecture technologies. This compute node runs in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment, using advanced processing technology. The compute node supports IBM AIX®, IBM i, or Linux operating environments and can run a wide variety of workloads. The p270 is a standard compute form factor with two IBM POWER7+™ dual-chip module (DCM) processor sockets.

Figure 1 shows the IBM Flex System p270 Compute Node.

Figure 1. The IBM Flex System p270 Compute Node

Did you know?

The p270 uses the new POWER7+ dual-chip modules (DCMs). A DCM packs two processor chips into each module that is installed in a processor socket on the system board. Doubling the number of processor chips per socket doubles the total number of processor cores, which yields more computing power than the single-chip modules of the predecessor p260, as evidenced by significantly higher performance metrics.

Key features

The IBM Flex System p270 Compute Node is a high-performance POWER7+-based server that is optimized for virtualization, performance, and efficiency. The POWER7+ processors contain technology and features that build on the IBM POWER7® baseline. This section describes the key features of the compute node.

Scalability and performance

The compute node offers numerous features to boost performance, improve scalability, and reduce costs:

  • Based on the proven IBM POWER7 architecture, the POWER7+ processors improve productivity by offering superior system performance with AltiVec floating point and integer SIMD instruction set acceleration.
  • Two processor modules, each with 12 cores and 120 MB of L3 cache (10 MB per core), maximize the concurrent execution of applications.
  • Choice of processor core frequencies: 3.1 GHz or 3.4 GHz.
  • Integrated IBM PowerVM® technology, which provides superior virtualization performance and flexibility.
  • Up to 16 DDR3 ECC memory RDIMMs that provide a memory capacity of up to 512 GB.
  • Optional support for IBM Active Memory™ Expansion, which allows the effective maximum memory capacity to be much larger than the true physical memory through innovative compression techniques (see the sketch after this list).
  • The use of solid-state drives (SSDs) instead of traditional spinning drives (HDDs), which can significantly improve I/O performance. An SSD can support up to 100 times more I/O operations per second (IOPS) than a typical HDD.
  • Up to eight 10 Gb Ethernet ports per compute node maximize networking resources in a virtualized environment.
  • Includes two P7IOC high-performance I/O bus controllers to maximize throughput and bandwidth.
  • Support for up to two high-bandwidth I/O adapters, with support for 10 Gb Ethernet, 16 Gb Fibre Channel, and QDR InfiniBand.
  • Support for the optional IBM Flex System Dual VIOS Adapter, which provides a second integrated SAS controller enabling dual VIOS support with two internal disks.
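Active Memory Expansion trades processor cycles for effective memory capacity. As a minimal sketch of the arithmetic, assuming an illustrative expansion factor (achievable factors are workload-dependent and are estimated with the AIX amepat planning tool; the function name here is hypothetical):

  def effective_memory_gb(physical_gb: float, expansion_factor: float) -> float:
      """Effective capacity under Active Memory Expansion: AIX presents
      physical_gb * expansion_factor of apparent memory to the partition
      by compressing less-frequently-used pages."""
      return physical_gb * expansion_factor

  # Example: 512 GB of physical memory with a 1.25 expansion factor
  # appears to the partition as 640 GB.
  print(effective_memory_gb(512, 1.25))  # 640.0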

Availability and serviceability

The p270 provides many features to simplify serviceability and increase system uptime:
  • ECC and Chipkill memory provide error detection and correction, including recovery from the failure of an entire memory chip.
  • Tool-less cover removal provides easy access to upgrades and serviceable parts, such as drives, memory, and adapters.
  • A light path diagnostics panel and individual light path LEDs quickly lead the technician to failed (or failing) components. This simplifies servicing, speeds up problem resolution, and helps improve system availability.
  • Predictive Failure Analysis (PFA) detects when system components (for example, processors, memory, and hard disk drives) operate outside of standard thresholds and generates proactive alerts in advance of possible failure, therefore increasing uptime.
  • Available solid-state drives (SSDs) offer significantly better reliability than traditional mechanical HDDs for greater uptime.
  • A built-in Integrated Flexible Service Processor (FSP) continuously monitors system parameters, triggers alerts, and performs recovery actions in case of failures to minimize downtime.
  • A front panel USB port for upgrades and local servicing tasks.
  • Three-year customer-replaceable unit and onsite limited warranty with next-business-day response, 9 hours a day, 5 days a week (9x5). Optional service upgrades are available.

Manageability and security

Powerful systems management features simplify management of the p270:
  • Includes an FSP to monitor the compute node's availability and to provide diagnostic information.
  • Integrates with the IBM Flex System™ Manager for proactive systems management. It offers comprehensive systems management for the entire IBM Flex System platform, which helps increase uptime, reduce costs, and improve productivity through advanced server management capabilities.
  • Optional support for management through a Hardware Management Console (HMC) or an Integrated Virtualization Manager (IVM) appliance.

Energy efficiency

The compute node offers the following energy-efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a green environment:
  • The component-sharing design of the IBM Flex System chassis, in which power supplies and fans are shared across compute nodes, reduces power and cooling costs.
  • Support for IBM EnergyScale™ to dynamically optimize processor performance versus power consumption and system workload.
  • SSDs consume as much as 80% less power than traditional spinning 2.5-inch HDDs.
  • The compute node uses hexagonal ventilation holes, a part of IBM Calibrated Vectored Cooling™ technology. Hexagonal holes can be grouped more densely than round holes, providing more efficient airflow through the system.

Locations of key components and connectors

The following figure shows the front of the p270.

Figure 2. Front view of the IBM Flex System p270 Compute Node

The following figure shows the locations of the key components inside the p270.

Figure 3. Inside view of the IBM Flex System p270 Compute Node

Standard specifications

The following table lists the standard specifications.

Table 1. Standard specifications

Model number: 7954-24X
Form factor: Standard-width compute node
Chassis support: IBM Flex System Enterprise Chassis
Processor: Two IBM POWER7+ dual-chip modules (DCMs). Each DCM contains two processor chips, each with six cores (24 cores total). Cores run at 3.1 GHz or 3.4 GHz, and each core has 10 MB of L3 cache (240 MB of L3 cache total). Integrated memory controllers, with four memory channels per DCM; each memory channel operates at 6.4 Gbps. One GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 32 nm fabrication technology.
Chipset: IBM P7IOC I/O hub
Memory: 16 DIMM sockets. RDIMM DDR3 memory is supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX V6.1 or later. All DIMMs operate at 1066 MHz. Both low-profile (LP) and very-low-profile (VLP) DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows both LP and VLP DIMMs.
Memory maximum: 512 GB using 16x 32 GB DIMMs
Memory protection: ECC, Chipkill
Disk drive bays: Two 2.5-inch non-hot-swap drive bays supporting 2.5-inch SAS HDDs or 1.8-inch SATA SSDs. If LP DIMMs are installed, only 1.8-inch SSDs are supported. If VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
Maximum internal storage: 1.8 TB using two 900 GB SAS HDDs, or 354 GB using two 177 GB SSDs
SAS controller: An IBM Obsidian-E SAS controller embedded on the system board connects to the two local drive bays. It supports 3 Gbps SAS with a PCIe 2.0 x8 host interface, and supports RAID 0 and RAID 10 with two drives. A second Obsidian-E SAS controller is available through the optional IBM Flex System Dual VIOS Adapter. When the Dual VIOS Adapter is installed, each SAS controller controls one drive.
RAID support: Without the Dual VIOS Adapter installed: RAID 0 and RAID 10 (two drives). With the Dual VIOS Adapter installed: RAID 0 only (one drive attached to each SAS controller).
Network interfaces: None standard. Optional 1 Gb or 10 Gb Ethernet adapters.
PCI expansion slots: Two I/O connectors for adapters. PCIe 2.0 x16 interface.
Ports: One external USB port
Systems management: FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager™ and IBM Systems Director. Optional support for a Hardware Management Console (HMC) or an Integrated Virtualization Manager (IVM) console.
Security features: FSP password, selectable boot sequence
Video: None. Remote management through Serial over LAN and IBM Flex System Manager.
Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5 next-business-day response
Operating systems supported: IBM AIX, IBM i, and Linux. See "Supported operating systems" for details.
Service and support: Optional service upgrades are available through IBM ServicePac® offerings: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for IBM hardware and selected IBM and OEM software.
Dimensions: Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.)
Weight: Maximum configuration: 7.7 kg (17.0 lb)

The compute node is shipped with the following items:
  • Statement of Limited Warranty
  • Important Notices
  • Documentation CD that contains the Installation and User's Guide

Chassis support

The p270 is supported in the IBM Flex System Enterprise Chassis.

Up to 14 p270 Compute Nodes can be installed in the chassis. The actual number of systems that can be installed in a chassis depends on these factors:

  • The number of power supplies that are installed
  • The capacity of the power supplies that are installed (2100 W or 2500 W)
  • The power redundancy policy that is used (N+1 or N+N)

The following table provides guidelines on how many p270 systems can be installed. For more guidance, use the Power Configurator, which can be found at the following website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html

In the table, a value of 14 means that there is no restriction on the number of p270 compute nodes that can be installed; a lower value means that some bays in the chassis must be left empty.

Table 2. Maximum number of p270 Compute Nodes that are installable based on the power supplies that are installed and the power redundancy policy that is used
Power supplies instaled in the chassisN+1, N=5
6 power supplies
N+1, N=4
5 power supplies
N+1, N=3
5 power supplies
N+N, N=3
6 power supplies
2100 W141299
2500 W14141212
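For a quick first check before running the Power Configurator, the guideline values in Table 2 can be encoded as a simple lookup. This sketch only transcribes the table above (the names are illustrative; the Power Configurator remains the authoritative tool):

  # Guideline maximum p270 nodes per chassis, keyed by
  # (power supply rating, redundancy policy), from Table 2.
  MAX_P270_NODES = {
      ("2100 W", "N+1, N=5"): 14,
      ("2100 W", "N+1, N=4"): 12,
      ("2100 W", "N+1, N=3"): 9,
      ("2100 W", "N+N, N=3"): 9,
      ("2500 W", "N+1, N=5"): 14,
      ("2500 W", "N+1, N=4"): 14,
      ("2500 W", "N+1, N=3"): 12,
      ("2500 W", "N+N, N=3"): 12,
  }

  def max_p270_nodes(psu_rating: str, policy: str) -> int:
      """Look up the guideline maximum for a chassis power configuration."""
      return MAX_P270_NODES[(psu_rating, policy)]

  # Five 2100 W supplies with N+1 redundancy support up to 12 nodes:
  print(max_p270_nodes("2100 W", "N+1, N=4"))  # 12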

Processor features

The compute node supports the processor features that are listed in the following table.

Table 3. Processor features for the p270

Feature code | Processor description | Minimum | Maximum
EPRF | 12-core 3.1 GHz POWER7+ Dual-Chip Module | 2 | 2
EPRE | 12-core 3.4 GHz POWER7+ Dual-Chip Module | 2 | 2
2319 | Factory deconfiguration of one core | 0 | 23

Memory features

IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostics panel for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

The compute node supports low profile (LP) or very low profile (VLP) DDR3 memory RDIMMs. If LP memory is used, 2.5-inch drives are not supported in the system because of physical space restrictions. However, 1.8-inch SSDs are still supported. If VLP memory is used, either 2.5-inch HDDs or 1.8-inch SSDs are supported.

The p270 supports up to 16 DIMMs. Each DCM has four memory channels that are enabled, and there are two DIMMs per channel. All supported DIMMs operate at 1066 MHz.

The following table lists memory features that are available for the compute node. DIMMs are ordered and can be installed two at a time, but to maximize memory performance, install them in sets of eight for the p270 (one for each of the memory channels).

Table 4. Memory features

Feature code | Description | Form factor
8196 | 8 GB (2x 4 GB RDIMMs) DDR3 1066 MHz System Memory | VLP
EEMD | 16 GB (2x 8 GB RDIMMs) 4Gb DDR3 1066 MHz System Memory | VLP
EEME* | 32 GB (2x 16 GB RDIMMs) 4Gb DDR3 1066 MHz System Memory | LP
EEMF* | 64 GB (2x 32 GB RDIMMs) 4Gb DDR3 1066 MHz System Memory | LP
* If 2.5-inch drives are installed, the low-profile DIMM features EEME and EEMF cannot be used.
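The interplay between DIMM form factor, socket count, and drive type can be summarized as a short validity check. This sketch encodes only the rules stated above, using the Table 4 feature codes; the function is illustrative, not an IBM configuration tool:

  # DIMM form factor by memory feature code (from Table 4).
  FORM_FACTOR = {"8196": "VLP", "EEMD": "VLP", "EEME": "LP", "EEMF": "LP"}

  def valid_memory_drive_config(dimm_features, drive_type=None):
      """Check a memory/drive combination against the p270 placement rules.

      dimm_features: list of Table 4 feature codes; each feature is a DIMM
      pair, so up to 8 features fill the 16 sockets.
      drive_type: None, "2.5-inch HDD", or "1.8-inch SSD".
      """
      if len(dimm_features) > 8:
          return False  # only 16 DIMM sockets (8 pairs)
      uses_lp = any(FORM_FACTOR[f] == "LP" for f in dimm_features)
      if drive_type == "2.5-inch HDD" and uses_lp:
          return False  # LP DIMMs rule out 2.5-inch drives
      return True

  print(valid_memory_drive_config(["EEME"] * 8, "2.5-inch HDD"))  # False
  print(valid_memory_drive_config(["EEME"] * 8, "1.8-inch SSD"))  # True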

Internal disk storage options

The p270 Compute Node has two non-hot-swap drive bays that are attached to the cover of the system. Either two 2.5-inch SAS HDDs or two 1.8-inch SATA SSDs can be installed. If 2.5-inch HDDs are installed, then LP memory DIMMs are not supported; only VLP DIMMs are supported. The use of 1.8-inch SSDs does not limit the memory DIMMs that are used. SSDs and HDDs cannot be mixed. All drives are non-hot-swap.

The following figure shows the drives that are installed on the underside of the cover of the p270.

Figure 4. Internal drives in the p270

The p270 has an onboard SAS controller that manages the two non-hot-swap internal drives. Optionally, the compute node supports a second SAS controller (the IBM Flex System Dual VIOS Adapter, feature EC2F) that is installed in the expansion connector and splits control of the drives so that each controller manages one drive. This split allows for dual VIOS support with internal disks. The Dual VIOS Adapter is physically installed under I/O adapter 2 (see the preceding figure) and does not impede the use of any I/O adapter slots.

Without the Dual VIOS Adapter installed and two drives installed, RAID 0 and RAID 10 are the supported RAID levels. With the Dual VIOS Adapter installed, only RAID 0 (no redundancy) is supported because each SAS controller controls only one drive.

RAID 0 on a single drive is implemented as a 528-byte stripe that consists of an 8-byte header, 512 bytes of data, a 4-byte copy of the CRC32, and the 4-byte CRC32 itself. The RAID 10 implementation on two drives is a mirror across the two disks, with a 528-byte stripe similar to RAID 0. RAID 10 as defined on the p270 is equivalent to the industry-standard RAID 1 definition.
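A short sketch makes the arithmetic of that layout concrete. The header contents and the exact span covered by the CRC are not documented here, so the following only illustrates the 8 + 512 + 4 + 4 = 528-byte structure and is not the controller's actual implementation:

  import binascii
  import struct

  def pack_stripe(header: bytes, data: bytes) -> bytes:
      """Assemble one 528-byte stripe: an 8-byte header, 512 bytes of
      data, a 4-byte copy of the CRC32, and the 4-byte CRC32 itself.

      Assumption: the CRC is computed over the data bytes only; the real
      controller's CRC coverage and byte order are not documented here."""
      assert len(header) == 8 and len(data) == 512
      crc = struct.pack(">I", binascii.crc32(data) & 0xFFFFFFFF)
      return header + data + crc + crc

  stripe = pack_stripe(bytes(8), bytes(512))
  print(len(stripe))  # 528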

The following table lists the supported drive options. SAS drives operate at 3 Gbps when they are installed in the p270.

Table 5. Drive options for internal disk storage

Feature code | Description | Maximum supported
Optional second SAS adapter, installed in the expansion connector:
EC2F | IBM Flex System Dual VIOS Adapter | 1
2.5-inch SAS HDDs:
8274 | 300 GB 10K RPM non-hot-swap 6 Gbps SAS | 2
8276 | 600 GB 10K RPM non-hot-swap 6 Gbps SAS | 2
8311 | 900 GB 10K RPM non-hot-swap 6 Gbps SAS | 2
1.8-inch SSDs:
8207 | 177 GB SATA non-hot-swap SSD | 2

You must select the top cover feature that matches the drives to be installed: 2.5-inch drives, 1.8-inch drives, or no drives. The following table lists the top cover options.

Table 6. Top cover options for the p270
Feature code | Description
7069 | Top cover with connectors for 2.5-inch drives for the p270
7068 | Top cover with connectors for 1.8-inch drives for the p270
7067 | Top cover for no drives on the p270


Internal tape drives

The compute node does not support an internal tape drive. However, it can be attached to external tape drives by using USB or Fibre Channel connectivity. Supported external backup units are listed in the IBM Flex System Interoperability Guide, which can be found at http://www.redbooks.ibm.com/fsig.


Optical drives

The compute node does not support an internal optical drive option. However, you can connect an external USB optical drive. Supported external optical drives are listed in the IBM Flex System Interoperability Guide, which can be found at http://www.redbooks.ibm.com/fsig.

I/O architecture

The p270 has two I/O expansion connectors for attaching I/O adapters.

The following figure shows the location of the I/O adapters in the p270.

Figure 5. Location of the I/O adapter slots in the IBM Flex System p270 Compute Node

All I/O adapters are the same form factor and can be used in any available slot. A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as indicated in the following table. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.

Table 7. Adapter to I/O bay correspondence

I/O adapter slot | Port on the adapter* | Bay 1 | Bay 2 | Bay 3 | Bay 4
Slot 1 | Port 1 | Yes | No | No | No
Slot 1 | Port 2 | No | Yes | No | No
Slot 1 | Port 3 (4- and 8-port cards) | Yes | No | No | No
Slot 1 | Port 4 (4- and 8-port cards) | No | Yes | No | No
Slot 1 | Port 5 (8-port cards) | Yes | No | No | No
Slot 1 | Port 6 (8-port cards) | No | Yes | No | No
Slot 1 | Port 7 (8-port cards)** | Yes | No | No | No
Slot 1 | Port 8 (8-port cards)** | No | Yes | No | No
Slot 2 | Port 1 | No | No | Yes | No
Slot 2 | Port 2 | No | No | No | Yes
Slot 2 | Port 3 (4- and 8-port cards) | No | No | Yes | No
Slot 2 | Port 4 (4- and 8-port cards) | No | No | No | Yes
Slot 2 | Port 5 (8-port cards) | No | No | Yes | No
Slot 2 | Port 6 (8-port cards) | No | No | No | Yes
Slot 2 | Port 7 (8-port cards)** | No | No | Yes | No
Slot 2 | Port 8 (8-port cards)** | No | No | No | Yes
* The use of adapter ports 3, 4, 5, and 6 requires upgrades to the installed switches. The EN4091 Pass-thru supports only ports 1 and 2 (and only when two Pass-thru modules are installed).
** Adapter ports 7 and 8 are reserved for future use. The chassis supports all eight ports but there are no switches that are available that connect to these ports.
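The pattern in Table 7 is regular enough to express as a function: odd-numbered ports of an adapter in slot 1 connect to bay 1 and even-numbered ports to bay 2, while slot 2 uses bays 3 and 4 the same way. A sketch (the function name is illustrative):

  def io_module_bay(slot: int, port: int) -> int:
      """Return the chassis I/O module bay used by an adapter port,
      following Table 7: slot 1 alternates bays 1/2, slot 2 alternates
      bays 3/4 (odd ports go to the first bay of the pair)."""
      if slot not in (1, 2) or port not in range(1, 9):
          raise ValueError("slot must be 1 or 2, port must be 1-8")
      first_bay = 1 if slot == 1 else 3
      return first_bay if port % 2 else first_bay + 1

  # Port 4 of an adapter in slot 2 connects to I/O module bay 4:
  print(io_module_bay(2, 4))  # 4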

The following figure shows the location of the switch bays in the IBM Flex System Enterprise Chassis.

Figure 6. Location of the switch bays in the IBM Flex System Enterprise Chassis

The following figure shows how two-port adapters are connected to switches installed in the chassis.

Logical layout of the interconnects between I/O adapters and I/O modules
Figure 7. Logical layout of the interconnects between I/O adapters and I/O modules

Network adapters

The compute node has no network interfaces as standard; networking is provided by adapters, which gives you the flexibility to choose the fabric. The following table lists the supported network adapters and the slots in which each is supported.

Table 8. Network adapters

Feature code | Description | Slot 1 | Slot 2
10 Gb Ethernet:
EC24 | IBM Flex System CN4058 8-port 10Gb Converged Adapter (2 ASICs) | Yes | Yes
EC26 | IBM Flex System EN4132 2-port 10Gb RoCE Adapter (1 ASIC) | No | Yes
1762 | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter (2 ASICs) | Yes | Yes
1 Gb Ethernet:
1763 | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter (2 ASICs) | Yes | Yes
InfiniBand:
1761 | IBM Flex System IB6132 2-port QDR InfiniBand Adapter (1 ASIC) | No | Yes

When adapters are installed in slots, ensure that compatible switches are installed in the corresponding bays of the chassis:
  • IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (#ESW2)
  • IBM Flex System Fabric EN4093R 10Gb Scalable Switch (#ESW7)
  • IBM Flex System Fabric EN4093 10Gb Scalable Switch (#3593)
  • IBM Flex System EN4091 10Gb Ethernet Pass-thru (#3700)
  • IBM Flex System EN2092 1Gb Ethernet Scalable Switch (#3598)
  • IBM Flex System Fabric SI4093 System Interconnect Module (#ESWA)
  • IBM Flex System IB6131 InfiniBand Switch (#3699)
  • IBM Flex System EN6131 40 Gb Ethernet Switch (#ESW6)

For compatibility information, see the IBM Flex System Interoperability Guide, which can be found at http://www.redbooks.ibm.com/fsig.

Storage host bus adapters

The following table lists the storage HBAs that are supported by the compute node.

Table 9. Storage adapters

Feature code | Description | Slot 1 | Slot 2
Fibre Channel:
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter (1 ASIC) | No | Yes
EC23 | IBM Flex System FC5052 2-port 16Gb FC Adapter (1 ASIC) | No | Yes
EC2E | IBM Flex System FC5054 4-port 16Gb FC Adapter (2 ASICs) | Yes | Yes

When adapters are installed in slots, ensure that compatible switches are installed in the corresponding bays of the chassis:
  • IBM Flex System FC3171 8Gb SAN Switch (#3595)
  • IBM Flex System FC3171 8Gb SAN Pass-thru (#3591)
  • IBM Flex System FC5022 16Gb SAN Scalable Switch (#3770)

For compatibility information, see the IBM Flex System Interoperability Guide, which can be found at http://www.redbooks.ibm.com/fsig.

Power supplies

The compute node power is derived from the power supplies that are installed in the chassis. There are no options regarding power supplies. Support for the p270 might be affected by the power supply that is installed in the chassis, as described in the "Chassis support" section.

Integrated virtualization

The compute node supports PowerVM virtualization capabilities for AIX, IBM i, and Linux environments. PowerVM contains the following features:

  • Support for up to 480 virtual servers (or logical partitions, LPARs)
  • Role-based access control (RBAC)

    RBAC brings an added level of security and flexibility to the administration of the Virtual I/O Server (VIOS), a part of PowerVM. With RBAC, you can create a set of authorizations for the user management commands. These authorizations can be assigned to a role (for example, UserManagement), and that role can be given to any user. A user with this role can then manage the users on the system but has no further access. With RBAC, the VIOS can split management functions that currently can be performed only by the padmin user, providing better security by granting users only the access they need, and easier management and auditing of system functions.

  • Suspend/resume

    Using suspend/resume, you can suspend a partition for a long period (more than 5 - 10 seconds), saving the partition state (memory, NVRAM, and VSP state) to persistent storage. Suspension frees the server resources that were in use by that partition. At resume time, the partition state is restored to server resources, and operation of the partition and its applications continues, either on the same server or, with PowerVM Enterprise and VMControl Enterprise, on another server.

  • Shared storage pools

    VIOS allows the creation of storage pools that can be accessed by VIOS partitions that are deployed across multiple IBM Power Systems™ servers. Therefore, an assigned allocation of storage capacity can be efficiently managed and shared. Up to four systems can participate in a Shared Storage Pool configuration. This can improve efficiency, agility, scalability, flexibility, and availability.

    The Storage Mobility feature allows data to be moved to new storage devices within Shared Storage Pools, while the virtual servers remain completely active and available. The VM Storage Snapshots/Rollback feature allows multiple point-in-time snapshots of individual virtual server storage, and these copies can be used to quickly roll back a virtual server to a particular snapshot image. The VM Storage Snapshots/Rollback functionality can be used to capture a VM image for cloning purposes or before applying maintenance.

  • Thin provisioning

    VIOS supports highly efficient storage provisioning, where virtualized workloads in VMs can have storage resources from a shared storage pool dynamically added or released, as required.

  • Network node balancing for redundant Shared Ethernet Adapters (SEAs)

    This is a useful function when multiple VLANs are supported in a dual VIOS environment. The implementation is based on a more granular treatment of trunking, where there are different trunks that are defined for the SEAs on each VIOS. Each trunk serves different VLANs, and each VIOS can be the primary for a different trunk. This occurs with just one SEA definition on each VIOS.


Light path diagnostics panel

For quick problem determination when you are physically at the compute node, the compute node offers a three-step guided path:

  1. The fault LED on the front panel
  2. The light path diagnostics panel, shown in Figure 8
  3. LEDs next to key components on the system board

The light path diagnostics panel is visible when you remove the compute node from the chassis. The panel is at the upper right of the compute node, as shown in the following figure.

Figure 8. Location of the light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the compute node is removed from the chassis.

The meanings of the LEDs in the light path diagnostics panel are listed in the following table.

Table 10. Light path diagnostic panel LEDs
LED | Meaning
LP | The light path diagnostics panel is operational.
S BRD | A system board error is detected.
MGMT | There is an error with the FSP.
D BRD | There is a fault with the disk drive board.
DRV 1 | There is a drive 1 fault.
DRV 2 | There is a drive 2 fault.
ETE | A fault is detected with the IBM Flex System Dual VIOS Adapter that is installed in the expansion connector.

If problems occur, the light path diagnostics LEDs assist you in identifying the subsystem that is involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. This temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts towards a resolution.

Typically, an administrator has obtained this information from the IBM Flex System Manager or Chassis Management Module before removing the node, but having the LEDs helps with repairs and troubleshooting if onsite assistance is needed.

Supported operating systems

The p270 Compute Node supports the following operating systems:

  • AIX V7.1 with the 7100-02 Technology Level and Service Pack 3
  • AIX V6.1 with the 6100-08 Technology Level and Service Pack 3
  • VIOS 2.2.2.3 or later
  • IBM i 6.1 with i 6.1.1-K machine code, or later
  • IBM i 7.1 TR6 or later
  • SUSE Linux Enterprise Server 11 Service Pack 2 for POWER
  • Red Hat Enterprise Linux 6.4 for POWER

Note: Support for some of these operating system versions begins after general availability. For the latest information about the specific versions and service levels that are supported and any other prerequisites, see the IBM Fix Level Recommendation Tool website found at:
http://www14.software.ibm.com/support/customercare/flrt/

Physical specifications

Dimensions and weight of the p270:

Width: 215 mm (8.5 in.)
Height: 51 mm (2.0 in.)
Depth: 493 mm (19.4 in.)
Weight, maximum configuration: 7.7 kg (17.0 lb)

Supported environment

The IBM Flex System p270 Compute Node and the IBM Flex System Enterprise Chassis comply with ASHRAE Class A3 specifications.

This is the supported operating environment:

  • 5 - 40 °C (41 - 104 °F) at 0 - 914 m (0 - 3,000 ft)
  • 5 - 28 °C (41 - 82 °F) at 914 - 3,050 m (3,000 - 10,000 ft)
  • Relative humidity: 8 - 85%
  • Maximum altitude: 3,050 m (10,000 ft)
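As a sketch, the published bands can be checked programmatically. ASHRAE A3 actually derates the allowable temperature gradually with altitude; this check applies only the two coarse bands listed above, and the function name is illustrative:

  def within_operating_env(temp_c: float, altitude_m: float,
                           humidity_pct: float) -> bool:
      """Check a reading against the two published operating bands:
      5 - 40 degC up to 914 m, and 5 - 28 degC from 914 m to 3,050 m."""
      if not (0 <= altitude_m <= 3050):
          return False
      if not (8 <= humidity_pct <= 85):
          return False
      max_temp_c = 40 if altitude_m <= 914 else 28
      return 5 <= temp_c <= max_temp_c

  print(within_operating_env(30, 1500, 50))  # False: above the 28 degC band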

Warranty options

The IBM Flex System p270 Compute Node has a three-year onsite warranty with 9x5 next-business-day terms. IBM offers warranty service upgrades through IBM ServicePac. IBM ServicePac is a series of prepackaged warranty maintenance upgrades and post-warranty maintenance agreements with a well-defined scope of services, including service hours, response time, term of service, and service agreement terms and conditions.

IBM ServicePac offerings are country-specific. Each country might have its own service types, service levels, response times, and terms and conditions. Not all types of ServicePac are available in every country. For more information about the IBM ServicePac offerings that are available in your country, see the IBM ServicePac Product Selector at https://www-304.ibm.com/sales/gss/download/spst/servicepac.

The following table explains the warranty service definitions in more detail.

Table 11. Warranty service definitions

Term | Description
IBM onsite repair (IOR) | A service technician comes to the server's location for equipment repair.
24x7x2 hour | A service technician is scheduled to arrive at the customer's location within two hours after remote problem determination is completed. Service is available 24 hours a day, every day, including IBM holidays.
24x7x4 hour | A service technician is scheduled to arrive at the customer's location within four hours after remote problem determination is completed. Service is available 24 hours a day, every day, including IBM holidays.
9x5x4 hour | A service technician is scheduled to arrive at the customer's location within four business hours after remote problem determination is completed. Service is available from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM holidays. If it is determined after 1:00 p.m. that onsite service is required, the service technician arrives the morning of the following business day. For noncritical service requests, a service technician arrives by the end of the following business day.
9x5 next business day | A service technician is scheduled to arrive at the customer's location on the business day after the call is received, following remote problem determination. Service is available from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM holidays.

In general, these are the types of IBM ServicePac warranty and maintenance service upgrades:
  • One, two, three, four, or five years of 9x5 or 24x7 service coverage
  • Onsite repair from next-business-day to four or two hours
  • One or two years of warranty extension

Regulatory compliance

The compute node conforms to the following standards:

  • ASHRAE Class A3
  • US: FCC - Verified to comply with Part 15 of the FCC Rules Class A
  • Canada: ICES-004, issue 3 Class A
  • EMEA: EN55022: 2006 + A1:2007 Class A
  • EMEA: EN55024: 1998 + A1:2001 + A2:2003
  • Australia and New Zealand: CISPR 22, Class A
  • US: (UL Mark) UL 60950-1 1st Edition
  • Canada: (cUL Mark) CAN/CSA-C22.2 No. 60950-1 1st Edition
  • Europe: EN 60950-1:2006+A11:2009
  • CB: IEC60950-1, 2nd Edition
  • Russia: (GOST Mark) IEC60950-1

Related publications and links

For more information, see the following resources:


Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
06 August 2013

Last Update
26 December 2013




Author(s)

IBM Form Number
TIPS1018