Power7 System Firmware

Applies to: 8412-EAD; 9117-MMB; 9117-MMD; 9179-MHB and 9179-MHD

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.


Contents

1.0 Systems Affected
1.1 Minimum HMC Code Level
2.0 Important Information
3.0 Firmware Information and Description
4.0 How to Determine Currently Installed Firmware Level
5.0 Downloading the Firmware Package
6.0 Installing the Firmware
7.0 Firmware History


1.0 Systems Affected

This package provides firmware for Power 770 (9117-MMB, 9117-MMD), Power 780 (9179-MHB, 9179-MHD), and Power ESE (8412-EAD) servers only.

The firmware level in this package is: AM780_059_040 / FW780.11.

1.1 Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. Before starting the system firmware update, the HMC level must be equal to or higher than the "Minimum HMC Code Level". If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code level for this firmware is:  HMC V7 R7.9.0 (PTF MH01405) with mandatory efix (PTF MH01406).

Although the Minimum HMC Code Level for this firmware is listed above, HMC V7 R7.9.0 (PTF MH01405) with mandatory efix (PTF MH01406) and security fix (PTF MH01435), or higher, is recommended.
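As an illustration only (not an IBM-provided tool), the level check amounts to a component-by-component comparison of the reported HMC version against the minimum. The Python sketch below assumes the level is reported in the "V7R7.9.0" form quoted above:

    # Minimal sketch: compare a reported HMC level against the minimum level
    # required by this firmware package. Assumes the "V<v>R<r>.<m>.<f>" form.
    import re

    def parse_hmc_level(level):
        m = re.match(r"V(\d+)R(\d+)\.(\d+)(?:\.(\d+))?", level.replace(" ", ""))
        if not m:
            raise ValueError("Unrecognized HMC level: %s" % level)
        return tuple(int(g or 0) for g in m.groups())

    MINIMUM = "V7R7.9.0"   # minimum HMC code level for this firmware

    def meets_minimum(installed, minimum=MINIMUM):
        return parse_hmc_level(installed) >= parse_hmc_level(minimum)

    print(meets_minimum("V7 R7.8.0"))  # False - upgrade the HMC first
    print(meets_minimum("V7 R8.1.0"))  # True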

Note: Upgrading the HMC to V7R7.9.0 is required prior to installing this firmware because the firmware contains support for Single Root I/O Virtualization (SR-IOV) adapters. An SR-IOV adapter can be configured in shared mode and shared by multiple logical partitions at the same time. The HMC supports the configuration of the logical ports assigned to partitions, and supports the configuration, backup, and restore of the adapter and physical port properties.

For information concerning HMC releases and the latest PTFs,  go to the following URL to access Fix Central.
http://www-933.ibm.com/support/fixcentral/

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT):
http://www14.software.ibm.com/webapp/set2/flrt/home

NOTE: You must be logged in as hscroot in order for the firmware installation to complete correctly.

2.0 Important Information

Downgrading firmware from any given release level to an earlier release level is not recommended.
If you feel that it is necessary to downgrade the firmware on your system to an earlier release level, please contact your next level of support.

Concurrent Firmware Updates

Concurrent system firmware update is supported only on HMC-managed systems.

Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory.
Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors.
Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, some server models require an absolute minimum amount of memory for server firmware, regardless of the previously mentioned factors.
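As a rough worked illustration of the 8% estimate (an upper-bound rule of thumb, not an exact sizing):

    # Rule-of-thumb estimate described above; actual firmware memory use is
    # model- and configuration-dependent and is generally less than this.
    def estimated_firmware_memory_gb(installed_memory_gb):
        return installed_memory_gb * 0.08

    print(estimated_firmware_memory_gb(256))  # ~20.5 GB upper-bound estimate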

Additional information can be found at:
  http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/p7hat/iphatlparmemory.htm


3.0 Firmware Information and Description

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive.

For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed.  Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be released.

System firmware file naming convention:

01AMXXX_YYY_ZZZ

NOTE: Values of service pack and last disruptive service pack level (YYY and ZZZ) are only unique within a release level (XXX). For example, 01AM720_067_045 and 01AM740_067_053 are different service packs.

An installation is disruptive if:

The release levels (XXX) are different.
Example: Currently installed release is AM710, new release is AM720

The service pack level (YYY) and the last disruptive service pack level (ZZZ) are equal.
Example: AM720_120_120 is disruptive, no matter what level of AM720 is currently installed on the system

The service pack level (YYY) currently installed on the system is lower than the last disruptive service pack level (ZZZ) of the service pack to be installed.
Example: Currently installed service pack is AM720_120_120 and new service pack is AM720_152_130

An installation is concurrent if:

The release level (XXX) is the same, and
The service pack level (YYY) currently installed on the system is the same or higher than the last disruptive service pack level (ZZZ) of the service pack to be installed.

Example: Currently installed service pack is AM720_126_120,  new service pack is AM720_143_120.
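As an illustration only (not an IBM-provided tool), the rules above can be expressed as a small check on the 01AMXXX_YYY_ZZZ level names. The Python sketch below uses the example levels from this section:

    # Classify an update as concurrent or disruptive from the level names.
    # Levels follow the 01AMXXX_YYY_ZZZ naming convention described above.
    import re

    def parse_level(name):
        m = re.search(r"([A-Z]{2}\d{3})_(\d{3})_(\d{3})", name)
        if not m:
            raise ValueError("Unrecognized firmware level: %s" % name)
        release, spack, last_disruptive = m.groups()
        return release, int(spack), int(last_disruptive)

    def install_is_concurrent(installed, new):
        inst_rel, inst_sp, _ = parse_level(installed)
        new_rel, _, new_ldsp = parse_level(new)
        # Concurrent only if the release matches and the installed service
        # pack is at or above the new last disruptive service pack level.
        return inst_rel == new_rel and inst_sp >= new_ldsp

    print(install_is_concurrent("01AM720_126_120", "01AM720_143_120"))  # True  (concurrent)
    print(install_is_concurrent("01AM720_120_120", "01AM720_152_130"))  # False (disruptive)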

Firmware Information and Update Description

 
Filename                 Size        Checksum
01AM780_059_040.rpm      46344804    57477

Note: The checksum can be found by running the AIX sum command against the rpm file (only the first 5 digits are listed).
For example: sum 01AM780_059_040.rpm
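As an illustration only (not an IBM-provided script), the comparison can be automated with a small wrapper around the sum command; it assumes the rpm file is in the current directory on an AIX system and uses the expected value from the table above:

    # Run the AIX "sum" command and compare its checksum field against the
    # published value (only the first 5 digits are compared, as noted above).
    import subprocess

    EXPECTED = {"01AM780_059_040.rpm": "57477"}

    def checksum_matches(filename):
        out = subprocess.run(["sum", filename], capture_output=True,
                             text=True, check=True)
        checksum = out.stdout.split()[0]   # first field is the checksum
        return checksum[:5] == EXPECTED[filename]

    print(checksum_matches("01AM780_059_040.rpm"))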

AM780
For Impact, Severity, and other firmware definitions, refer to the 'Glossary of firmware terms' at the following URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs

The complete Firmware Fix History for this Release Level can be reviewed at the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/AM-Firmware-Hist.html

AM780_059_040 / FW780.11

06/23/14
Impact:  Security      Severity:  HIPER

System firmware changes that affect all systems

  • HIPER/Pervasive:  A security problem was fixed in the OpenSSL (Secure Socket Layer) protocol that allowed clients and servers, via a specially crafted handshake packet, to use weak keying material for communication.  A man-in-the-middle attacker could use this flaw to decrypt and modify traffic between the management console and the service processor.  The Common Vulnerabilities and Exposures issue number for this problem is CVE-2014-0224.
  • HIPER/Pervasive:  A security problem was fixed in OpenSSL for a buffer overflow in the Datagram Transport Layer Security (DTLS) when handling invalid DTLS packet fragments.  This could be used to execute arbitrary code on the service processor.  The Common Vulnerabilities and Exposures issue number for this problem is CVE-2014-0195.
  • HIPER/Pervasive:  Multiple security problems were fixed in the way that OpenSSL handled read and write buffers when the SSL_MODE_RELEASE_BUFFERS mode was enabled to prevent denial of service.  These could cause the service processor to reset or unexpectedly drop connections to the management console when processing certain SSL commands.  The Common Vulnerabilities and Exposures issue numbers for these problems are CVE-2010-5298 and CVE-2014-0198.
  • HIPER/Pervasive:  A security problem was fixed in OpenSSL to prevent a denial of service when handling certain Datagram Transport Layer Security (DTLS) ServerHello requests. A specially crafted DTLS handshake packet could cause the service processor to reset.  The Common Vulnerabilities and Exposures issue number for this problem is CVE-2014-0221.
  • HIPER/Pervasive:  A security problem was fixed in OpenSSL to prevent a denial of service by using an exploit of a null pointer de-reference during anonymous Elliptic Curve Diffie Hellman (ECDH) key exchange.  A specially crafted handshake packet could cause the service processor to reset.  The Common Vulnerabilities and Exposures issue number for this problem is CVE-2014-3470.
  • A security problem was fixed in the service processor TCP/IP stack to discard illegal TCP/IP packets that have the SYN and FIN flags set at the same time.  An explicit packet discard was needed to prevent further processing of the packet, which could result in a bypass of the iptables firewall rules.
AM780_056_040 / FW780.10

04/25/14
Impact: Serviceability         Severity:  SPE

New Features and Functions

  • Support for the 9117-MMD, 9179-MHD and 8412-EAD systems.
  • Support was added to the Virtual I/O Server (VIOS) for shared storage pool mirroring (RAID-1) using the virtual SCSI (VSCSI) storage adapter to provide redundancy for data storage.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support was added to the Management Console command line to allow configuring a shared control channel for multiple pairs of Shared Ethernet Adapters (SEAs).  This simplifies the control channel configuration to reduce network errors when the SEAs are in fail-over mode.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support was added for Single Root I/O Virtualization (SR-IOV) that enables the hypervisor to share an SR-IOV-capable PCI Express adapter across multiple partitions. The SR-IOV mode is supported for the following Ethernet Network Interface Controller (NIC) I/O adapters (SR-IOV supported in both native mode and through VIOS):
    -   F/C EN10 and CCIN 2C4C - Integrated Multi-function Card with Dual 10Gb Ethernet RJ45 and Copper Twinax
    -   F/C EN11 and CCIN 2C4D - Integrated Multi-function Card with Dual 10Gb Ethernet RJ45 and Short Range (SR) Optical
    -   F/C EN0H and CCIN 2B93 - PCI Express Generation 2 (PCIe2)  2x10Gb FCoE 2x1Gb Ethernet SFP+ Adapter
    -   F/C EN0K and CCIN 2CC1 - PCI Express Generation 2 (PCIe2)  4-port (10Gb FCoE & 1Gb Ethernet) SFP+Copper and RJ45
    System firmware updates the adapter firmware level on these adapters to 1.1.58.4 when a supported adapter is placed into SR-IOV mode.
    The SR-IOV mode for Ethernet NIC is supported on the following OS levels:
    -   AIX 6.1Y TL3 SP2, or later
    -   AIX 7.1N TL3 SP2, or later
    -   IBMi 7.1 with TR8, or later
    -   SUSE Linux Enterprise Server 11 SP3
    -   Red Hat Enterprise Linux 6.5
    -   VIOS 2.2.3.2, or later
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support was added to the Advanced System Management Interface (ASMI) to provide a menu for "Power Supply Idle Mode".  Using the "Power Supply Idle Mode" menu, the power supplies can either be enabled to save power by idling power supplies when possible, or disabled to keep all power supplies fully on and allow a balanced load to be maintained on the power distribution units (PDUs) of the system.  With power supply idle mode enabled, overall power usage is reduced when the system load is very light by having one power supply deliver all the power while the second power supply is maintained in a low power state.  All power supplies must be present and must support power supply idle mode before the mode can be enabled.
    Power Supply Idle Mode is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support was added for monitored compliance of the Power Integrated Facility for Linux (IFL).  IFL is an optional lower-cost per-processor-core activation for Linux-only workloads on IBM Power Systems.  Power IFL processor cores can be activated that are restricted to running Red Hat Linux or SUSE Linux.  In contrast, processor cores that are activated for general-purpose workloads can run any supported operating system.  Power IFL processor cores are enabled by feature code ELJ1 using Capacity Upgrade on Demand (CUoD).  Linux partitions can use IFL processors and the other processor cores, but AIX and IBM i5/OS cannot use the IFL processors.  The IFL monitored compliance process will send customer alert messages to the management console if the system is out of compliance for the number of IFL processors and general-purpose workload processors that are in active use compared to the number that have been licensed.
    Power IFL and monitored compliance are not supported on the IBM Power ESE (8412-EAD) system because it supports the AIX operating system only.
  • System recovery for interrupted AC power and Voltage Regulator Module (VRM) failures has been enhanced for systems with multiple CEC enclosures such that a power AC or VRM fault on one CEC drawer will no longer block the other CEC drawers from powering on.  Previously, all CEC enclosures in a system needed valid AC power before the power on of the system could proceed.
    This system recovery feature does not pertain to the IBM Power ESE (8412-EAD) system because it is a single CEC enclosure system.
  • Support for IBM PCIe 3.0 x8 dual 4-port SAS RAID adapter with 12 GB cache with feature code EJ0L and CCIN 57CE.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support was added to the Management Console and the Virtual I/O Server (VIOS) to provide the capability to enable and disable individual virtual Ethernet adapters from the management console.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support was added for the IBM Flash Adapter 90 (#ES09) PCIe 2.0 x8 with 0.9 TB of usable enterprise multi-level cell (eMLC) flash memory.  The system recognizes the PCI device as a high-power device needing additional cooling and increases the fan speeds accordingly.  This flash feature also provides:
        -  Up to 325K read IOPS and less than 100 microsecond latency.
        -  Four independent flash controllers.
        -  Capacitive emergency power loss protection.
        -  Half-length, full-height PCIe card form factor.
    The IBM Flash Adapter 90 is not included in base AIX installation media.  AIX feature support can be acquired at IBM Fix Central: http://www-933.ibm.com/support/fixcentral/ by selecting the Product Group System Storage.  This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support for Management Console logical partition Universally Unique IDs (UUIDs) so that the HMC preserves the UUID for logical partitions on backup/restore and migration.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support for IBM PCIe 3.0 x8 non-caching 2-port SAS RAID adapter with feature code EJ0J and CCIN 57B4.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support for Power Enterprise System Pools, which allow Capacity on Demand (CoD) resources, including processors and memory, to be aggregated and moved from one server in the pool to any other server in the pool as needed.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support for a Management Console Performance and Capacity Monitor (PCM) function to monitor and manage both physical and virtual resources.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • Support for virtual server network (VSN) Phase 2 that delivers IEEE standard 802.1Qbg based on Virtual Ethernet Port Aggregator (VEPA) switching.  This supports the Management Console assignment of the VEPA switching mode to virtual Ethernet switches used by the virtual Ethernet adapters of the logical partitions.  The server properties in the Management Console will show the capability "Virtual Server Network Phase 2 Capable" as "True" for the system.
    This feature is not supported on IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.

System firmware changes that affect all systems

  • A problem was fixed that prevented an HMC-managed system from being converted to manufacturing default configuration (MDC) mode when the management console command "lpcfgop -m <server> -o clear" failed to create the default partition.  The management console went to the incomplete state for this error.
  • A problem was fixed that logged an incorrect call home B7006956 NVRAM error during a power off of the system.  This error log indicates that the NVRAM of the system is in error and will be cleared on the next IPL of the system.  However, there is no NVRAM error and the error log was created because a reset/reload of the service processor occurred during the power off.
  • Help text for the Advanced System Management Interface (ASMI) "System Configuration/Hardware Deconfiguration/Clear All Deconfiguration Errors" menu option was enhanced to clarify that when selecting "Hardware Resources" value of "All hardware resources", the service processor deconfiguration data is not cleared.   The "Service processor" must be explicitly selected for that to be cleared.
  • A firmware code update problem was fixed that caused the Hardware Management Console (HMC) to go to "Incomplete State" for the system with SRC E302F880 when assignment of a partition universal unique identifier (UUID) failed for a partition that was already running.  This problem happens for disruptive code updates from pre-770 levels to 770 or later levels.
  • A problem was fixed that caused frequent SRC B1A38B24 error logs with a call home every 15 seconds when service processor network interfaces were incorrectly configured on the same subnet.  The frequency of the notification of the network subnet error has been reduced to once every 24 hours.
  • A problem was fixed that caused a memory clock failure to be called out as failure in the processor clock FRU.
  • A problem was fixed where a 12V DC power-good (pGood) input fault was reported as a SRC 11002620 with the wrong FRU callout of Un-P1 for system backplane.  The FRU callout for SRC 11002620 has been corrected to Un-P2 for I/O card.
  • A problem was fixed that prevented guard error logs from being reported for FRUs that were guarded during the system power on.  This could happen if the same FRU had been previously reported as guarded on a different power on of the system.  The requirement is now met that guarded FRUs are logged on every power on of the system.
  • A problem was fixed for the Advanced System Management Interface (ASMI) "Login Profile/Change Password" menu where ASMI would fail with "Console Internal Error, status code 500" displayed on the web browser when an incorrect current password was entered.
  • A problem was fixed for a system with pool resources for a resource remove operation that caused the number of unreturned resources to become incorrect.  This problem occurred if the system first became out of compliance with overdue unreturned resources and then another remove of a pool resources from the server was attempted.
  • A problem was fixed for the Advanced System Management Interface (ASMI)  "System Information/Firmware Maintenance History" menu option on the service processor to display the firmware maintenance history instead of the message  "No code update history log was found".
  • A problem was fixed for a Live Partition Mobility (LPM) suspend and transfer of a partition that caused the time of day to skip ahead to an incorrect value on the target system.  The problem only occurred when a suspended partition was migrated to a target CEC that had a hypervisor time that was later than the source CEC.
  • A problem was fixed for IBM Power Enterprise System Pools that prevented the management console from changing from the backup to the master role for the enterprise pool.  The following error message was displayed on management console:  "HSCL90F7 An internal error occurred trying to set a new master management console for the Power enterprise pool. Try the operation again.  If this error persists, contact your service representative."
    This defect does not pertain to the IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • A problem was fixed for Live Partition Mobility (LPM) where a 2x performance decrease occurs during the resume phase of the migration when migrating from a system with 780 or later firmware back to a system with a pre-780 level of firmware.

System firmware changes that affect certain systems

  • On systems with multiple CEC drawers or nodes, a problem was fixed in the service processor Advanced System Management Interface (ASMI) performance dump collection that only allowed performance data to be collected for the first node of the system.  The  "System Service Aids/Performance Dump" menu of the ASMI is used to work with the performance dump.
  • On systems involved in a series of consecutive Live Partition Mobility (LPM) operations, a memory leak problem was fixed in the run time abstraction service (RTAS) that caused a partition run time AIX crash with SRC 0c20.  Other possible symptoms include error logs with SRC BA330002 (RTAS memory allocation failure).
  • On systems running Dynamic Platform Optimizer (DPO) with one or more unlicensed processors, a problem was fixed where the system performance was significantly degraded during the DPO operation.  The amount of performance degradation was more for systems with larger numbers of unlicensed processors.
  • On systems with a redundant service processor, a problem was fixed where the service processor allowed a clock failover to occur without a SRC B158CC62 error log and without a hardware deconfiguration record for the failed clock source.  This resulted in the system running with only one clock source and without any alerts to warn that clock redundancy had been lost.
  • DEFERRED:  On systems with a redundant service processor, a problem was fixed that caused a system termination with SRC B158CC62 during a clock failover initiated by certain types of clock card failures.  This deferred fix addresses a problem that has a very low probability of occurrence.  As such customers may wait for the next planned service window to activate the deferred fix via a system reboot.
    This problem does not pertain to IBM Power 770 (9117-MMB) and IBM Power 780 (9179-MHB) systems.
  • On systems with a management console and service processors configured with Internet Protocol version 6 (IPv6) addresses,  a problem was fixed that prevented the management console from discovering the service processor.  The Service Location Protocol (SLP) on the service processor was not being enabled for IPv6, so it was unable to respond to IPv6 queries.
  • On systems with a F/C 5802 or 5877 I/O drawer installed, a problem was fixed that occurred during Offline Converter Assembly (OCA) replacement operations, where a false Voltage Regulator Module (VRM) fault was reported and SRC 10001511 or 10001521 was logged.  This resulted in the OCA LED getting stuck in an on or "fault" state and the OCA not powering on.
  • On systems with one memory clock deconfigured, a problem was fixed where the system failed to IPL using the second memory clock with SRCs B158CC62 and B181C041 logged.
  • On systems that require in-band flash to update system firmware, a problem was fixed so in-band update would not fail if the Permanent (P) or the Temporary (T) side of the service processor was marked invalid.   Attempting to in-band flash from the AIX or Linux command line failed with a BA280000 log reported.  Attempting to in-band flash from the AIX diagnostics menus also failed because the flash menu options did not appear in this case.
  • On a system with a partition with an AIX and Linux boot source to support dual booting, a problem was fixed that caused the Host Ethernet Adapter (HEA) to be disabled when rebooting from Linux to AIX.  Linux had disabled interrupts for the HEA on power down, causing an error for AIX when it tried to use the HEA to access the network.
  • On a system with a disk device with multiple boot partitions, a problem was fixed that caused System Management Services (SMS) to list only one boot partition.  Even though only one boot partition was listed in SMS, the AIX bootlist command could still be used to boot from any boot partition.
  • On systems with a redundant service processor with AC power missing to the node containing the anchor card, a problem was fixed that caused an IPL failure with SRC B181C062 when the anchor card could not be found in the vital product data (VPD) for the system.  With the fix, the system is able to find the anchor card and IPL since the anchor card gets its power from the service processor cable, not from the node where it resides.

Concurrent hot add/repair maintenance firmware fixes

  • On a system with sixteen or more logical partitions, a problem was fixed for a memory relocation error during concurrent hot node repair that caused a hang or a failure.  The problem can also be triggered by mirrored memory defragmentation on a system with selective memory mirroring.
AM780_054_040 / FW780.02

04/18/14
Impact: Security         Severity:  HIPER

System firmware changes that affect all systems
  • HIPER/Pervasive:  A  security problem was fixed in the OpenSSL Montgomery ladder implementation for the ECDSA (Elliptic Curve Digital Signature Algorithm) to protect sensitive information from being obtained with a flush and reload cache side-channel attack to recover ECDSA nonces from the service processor.  The Common Vulnerabilities and Exposures issue number is CVE-2014-0076.  The stolen ECDSA nonces could be used to decrypt the SSL sessions and compromise the Hardware Management Console (HMC) access password to the service processor.  Therefore, the HMC access password for the managed system should be changed after applying this fix.
  • HIPER/Pervasive:  A security problem was fixed in the OpenSSL Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) to not allow Heartbeat Extension packets to trigger a buffer over-read to steal private keys for the encrypted sessions on the service processor.  The Common Vulnerabilities and Exposures issue number is CVE-2014-0160, and it is also known as the Heartbleed vulnerability.  The stolen private keys could be used to decrypt the SSL sessions and compromise the Hardware Management Console (HMC) access password to the service processor.  Therefore, the HMC access password for the managed system should be changed after applying this fix.
  • A  security problem was fixed for the Lighttpd web server that allowed arbitrary SQL commands to be run on the service processor.  The Common Vulnerabilities and Exposures issue number is CVE-2014-2323.
  • A security problem was fixed for the Lighttpd web server where improperly-structured URLs could be used to view arbitrary files on the service processor.  The Common Vulnerabilities and Exposures issue number is CVE-2014-2324.
AM780_050_040 / FW780.01

03/10/14
Impact:  Data      Severity:  HIPER

System firmware changes that affect all systems

  • HIPER/Non-Pervasive:  A problem was fixed for a potential silent data corruption issue that may occur when a Live Partition Mobility (LPM) operation is performed from a system (source system) running a firmware level earlier than AH780_040 or AM780_040 to a system (target system) running AH780_040 or AM780_040.
AM780_040_040 / FW780.00

12/06/13
Impact:  New      Severity:  New

New Features and Functions

  • Support was added to upgrade the service processor to OpenSSL version 1.0.1 and for compliance with National Institute of Standards and Technology (NIST) Special Publication 800-131a.  SP800-131a compliance requires the use of stronger cryptographic keys and more robust cryptographic algorithms.
  • Support was added to the Virtual I/O Server (VIOS) for Universal Serial Bus (USB) removable hard-disk drive (HDD) devices.
  • Support was added in Advanced System Management Interface (ASMI) to facilitate capture and reporting of debug data for system performance problems.  The  "System Service Aids/Performance Dump" menu was added to ASMI to perform this function.
  • Support was added to the Management Console for group-based LDAP authentication.
  • Partition Firmware was enhanced to be able to recognize and boot from disks formatted with the GUID Partition Table (GPT) format that are capable of being greater than 2 TB in size.  GPT is a standard for the layout of the partition table on a physical hard disk, using globally unique identifiers (GUIDs), that does not have the 2 TB limit imposed by the DOS partition format.
  • The call home data for every serviceable event of the system was enhanced to include information on every guarded element (processor, memory, I/O chip, etc.) and contains the part number and location codes of the FRUs and the service processor de-configuration policy settings.
  • Support for Dynamic Platform Optimizer (DPO) enhancements to show the logical partition current and potential affinity scores.  The Management Console has also been enhanced to show the partition scoring.  The operating system (OS) levels that support DPO:

                ◦ AIX 6.1 TL8 or later
                ◦ AIX 7.1 TL2 or later
                ◦ VIOS 2.2.2.0
                ◦ IBM i 7.1 PTF MF56058
                ◦ Linux RHEL7
                ◦ Linux SLES12

         Note: If DPO is used with an older version of the OS that predates the above levels, either:
                   - The partition needs to be rebooted after DPO completes to optimize placement, or
                   - The partition is excluded from participating in the DPO operation (through a command line option on the "optmem" command that is used to initiate a
                      DPO operation).

  • Support for Dynamic Platform Optimizer (DPO) on 9117-MMB and 9179-MHB systems.
  • Support for Management Console command line to configure the ECC call home path for SSL proxy support.
  • Support for Management Console to minimize recovery state problems by using the hypervisor and VIOS configuration data to recreate partition data when needed.
  • Support for Management Console to provide scheduled operations to check if the partition affinity falls below a threshold and alert the user that Dynamic Platform Optimizer (DPO) is needed.
  • Support for enhanced platform serviceability to extend call home to include hardware in need of repair and to issue periodic service events to remind of failed hardware.
  • Support for Virtual I/O Server (VIOS) to support 4K block size DASD as a virtual device.
  • Support for performance improvements for concurrent Live Partition Mobility (LPM) migrations.
  • Support for Management Console to handle all Virtual I/O Server (VIOS) configuration tasks and provide assistance in configuring partitions to use redundant VIOS.
  • Support for Management Console to maintain a profile that is synchronized with the current configuration of the system, including Dynamic Logical Partitioning (DLPAR) changes.
  • Support for Virtual I/O Server (VIOS) for an IBMi client data connection to a SIS64 device driver backed by VSCSI physical volumes.
  • Support was dropped for Secure Sockets Layer (SSL) protocol version 2 and SSL weak and medium cipher suites in the service processor web server (Lighttpd).  Unsupported web browser connections to the Advanced System Management Interface (ASMI) secured port 443 (using https://) will now be rejected if those browsers do not support SSL version 3.  Supported web browsers for Power7 ASMI are Netscape (version 9.0.0.4), Microsoft Internet Explorer (version 7.0), Mozilla Firefox (version 2.0.0.11), and Opera (version 9.24).
  • Support was added in Advanced System Management Interface (ASMI) "System Configuration/Firmware Update Policy" menu to detect and display the appropriate Firmware Update Policy (depending on whether system is HMC managed) instead of requiring the user to select the Firmware Update Policy.  The menu also displays the "Minimum Code Level Supported" value.

System firmware changes that affect all systems

  • A problem was fixed that caused a service processor OmniOrb core dump with SRC B181EF88 logged.
  • A problem was fixed that caused the system attention LED to stay lit when a bad FRU was replaced.
  • A problem was fixed that caused a memory leak of 50 bytes of service processor memory for every call home operation.  This could potentially cause an out of memory condition for the service processor when running over an extended period of time without a reset.
  • A problem was fixed that caused a L2 cache error to not guard out the faulty processor, allowing the system to checkstop again on an error to the same faulty processor.
  • A problem was fixed that caused an HMC code update failure for the FSP on the accept operation, with SRC B1811402 logged, or left the FSP unable to boot on the updated side.
  • A problem was fixed that caused a system checkstop during hypervisor time keeping services.
  • A problem was fixed that caused a built-in self test (BIST) for GX slots to create corrupt error log values that core dumped the service processor with a B18187DA.  The corruption was caused by a failure to initialize the BIST array to 0 before starting the tests.
  • The Hypervisor was enhanced to allow the system to continue to boot using the redundant Anchor (VPD) card, instead of stopping the Hypervisor boot and logging SRC B7004715,  when the primary Anchor card has been corrupted.
  • A problem was fixed with the Dynamic Platform Optimizer (DPO) that caused memory affinity to be incorrectly reported to the partitions before the memory was optimized.   When this occurs, the performance is impacted over what would have been gained with the optimized memory values.
  • A problem was fixed that caused a migrated partition to reboot during transfer to a VIOS 2.2.2.0, and later, target system. A manual reboot would be required if transferred to a target system running an earlier VIOS release. Migration recovery may also be necessary.
  • A problem was fixed that can cause Anchor (VPD) card corruption and A70047xx SRCs to be logged.  Note: If a serviceable event with SRC A7004715 is present or was logged previously, damage to the VPD card may have occurred.  After the fix is applied, replacement of the Anchor VPD card is recommended in order to restore full redundancy.
  • The firmware was enhanced to display on the management console the correct number of concurrent Live Partition Mobility (LPM) operations that is supported.
  • A problem was fixed that caused a 1000911E platform event log (PEL) to be marked as not eligible for call home.  The PEL is now called home to allow for correction.  This PEL is logged when the hypervisor has changed the Machine Type Model Serial Number (MTMS) of an external enclosure to UTMP.xxx.xxxx because it cannot read the vital product data (VPD), because the VPD has invalid characters, or because the MTMS is a duplicate of another enclosure's.
  • A problem was fixed that caused the state of the Host Ethernet Adapter (HEA) port to be reported as down when the physical port is actually up.
  • When powering on a system partition, a problem was fixed that caused the partition universal unique identifier (UUID) to not get assigned, causing a B2006010 SRC in the error log.
  • For the sequence of a reboot of a system partition followed immediately by a power off of the partition, a problem was fixed where the hypervisor virtual service processor (VSP) incorrectly retained locks for the powered off partition, causing the CEC to go into recovery state during the next power on attempt.
  • A problem was fixed that caused an error log generated by the partition firmware to show conflicting firmware levels.  This problem occurs after a firmware update or a Live Partition Mobility (LPM) operation on the system.
  • A problem was fixed that caused the system attention LED to be lit without a corresponding SRC and error log for the event.  This problem typically occurs when an operating system on a partition terminates abnormally.
  • A problem was fixed that caused the slot index to be missing for virtual slot number 0 for the dynamic reconfiguration connector (DRC) name for virtual devices.  This error was visible from the management console when using commands such as "lshwres -r virtualio --rsubtype slot -m machine" to show the hardware resources for virtual devices.
  • A problem was fixed that caused a system checkstop with SRC B113E504 for a recoverable hardware fault.
  • A problem was fixed during resource dump processing that caused a read of an invalid system memory address and a SRC B181C141.  The invalid memory reference resulted from the service processor incorrectly referencing memory that had been relocated by the hypervisor.

System firmware changes that affect certain systems

  • A problem was fixed that caused fans to increase to maximum speeds with SRC B130B8AF logged as a result of thermal sensors with calibration errors.
  • On systems with an I/O tower attached, a problem was fixed that caused multiple service processor reset/reloads if the tower was continuously sending invalid System Power Control Network (SPCN) status data.
  • On systems with a redundant service processor, a problem was fixed that caused fans to run at a high-speed after a failover to the sibling service processor.
  • On systems with a F/C 5802 or 5877 I/O drawer installed, the firmware was enhanced to guarantee that an SRC will be generated when there is a power supply voltage fault.  If no SRC is generated, a loss of power redundancy may not be detected, which can lead to a drawer crash if the other power supply goes down.  This also fixes a problem that causes an 8 GB Fiber channel adapter in the drawer to fail if the 12V level fails in one Offline Converter Assembly (OCA).
  • On systems managed by an HMC with a F/C 5802 or 5877 I/O drawer installed, a problem was fixed that caused the hardware topology on the management console for the managed system to show "null" instead of "operational" for the affected I/O drawers.
  • On systems with a redundant service processor, a problem was fixed that caused a guarded sibling service processor deconfiguration details to not be able to be shown in the Advanced System Management Interface (ASMI).
  • On systems with a redundant service processor, a problem was fixed that caused a SRC B150D15E to be erroneously logged after a failover to the sibling service processor.
  • On systems with a F/C 5802 or 5877 I/O drawer installed, a problem was fixed where an Offline Converter Assembly (OCA) fault would appear to persist after an OCA micro-reset or OCA replacement.  The fault bit reported to the OS might not be cleared, indicating a fault still exists in the I/O drawer after it has been repaired.
  • When switching between turbocore and maxcore mode, a problem was fixed that caused the number of supported partitions to be reduced by 50%.
  • On systems in turbocore mode with unlicensed processors, a problem was fixed that caused an incorrect processor count.  The AIX command lparstat gave too high a value for "Active Physical CPUs in system" when it included unlicensed turbocore processors in the count instead of just counting the licensed processors.
  • A problem was fixed that was caused by an attempt to modify a virtual adapter from the management console command line when the command specifies it is an Ethernet adapter, but the virtual ID specified is for an adapter type other than Ethernet.  The managed system has to be rebooted to restore communications with the management console when this problem occurs; SRC B7000602 is also logged.
  • On systems running AIX or Linux, a problem was fixed that caused the operating system to halt when an InfiniBand Host Channel Adapter (HCA) adapter fails or malfunctions.
  • On systems running AIX or Linux, a hang in a Live Partition Mobility (LPM) migration for remote restart-capable partitions was fixed by adding a time-out for the required paging space to become available.  If after five minutes the required paging space is not available, the start migration command returns an error code of 0x40000042 (PagingSpaceNotReady) to the management console.
  • On systems running Dynamic Platform Optimizer (DPO) with no free memory, a problem was fixed that caused the Hardware Management Console (HMC) lsmemopt command to report the wrong status of completed with no partitions affected.  It should have indicated that DPO failed due to insufficient free memory.  DPO can only run when there is free memory in the system.
  • On systems with partitions using physical shared processor pools, a problem was fixed that caused partition hangs if the shared processor pool was reduced to a single processor.
  • On a system running a Live Partition Mobility (LPM) operation, a problem was fixed that caused the partition to successfully appear on the target system, but hang with a 2005 SRC.
  • A problem was fixed that caused SRC BA330000 to be logged after the successful migration of a partition running Ax740_xxx or Ax730_xxx firmware to a system running Ax760, or a later release, of firmware.  This problem can also cause SRCs BA330002, BA330003, and BA330004 to be erroneously logged over time when a partition is migrated from a system running Ax760, or a later release, to a system running Ax740_xxx or Ax730_xxx firmware.
  • On systems using IPv6 addresses, the firmware was enhanced to reduce the time it takes to install an operating system using the Network Installation Manager (NIM).
  • On systems managed by a management console, a problem was fixed that caused a partition to become unresponsive when the AIX command "update_flash -s" is run.
  • On systems with turbo-core enabled that are a target of Live Partition Mobility (LPM),  a problem was fixed where cache properties were not recognized and SRCs BA280000 and BA250010 reported.

Concurrent hot add/repair maintenance firmware fixes

  • A problem was fixed that caused a concurrent hot add/repair maintenance operation to fail on an erroneously logged error for the service processor battery with  SRCs B15A3303, B15A3305, and  B181EA35 reported.
  • A problem was fixed that caused a concurrent hot add/repair maintenance operation to fail if a memory channel failure on the CEC was followed by a service processor reset/reload.
  • A problem was fixed that caused SRC B15A3303  to be erroneously logged as a predictive error on the service processor sibling after a successful concurrent repair maintenance operation for the real-time clock (RTC) battery.
  • A problem was fixed that prevented the I/O slot information from being presented on the management console after a concurrent node repair.
  • A problem was fixed that caused Capacity on Demand (COD) "Out of Compliance" messages during concurrent maintenance operations when the system was actually in compliance for the licensed amount of resources in use.


4.0 How to Determine Currently Installed Firmware Level

For HMC managed systems:  From the HMC, select Updates in the navigation (left-hand) pane, then view the current levels of the desired server(s).

Alternately, use the Advanced System Management Interface (ASMI) Welcome pane. The current server firmware  appears in the top right corner. Example: AM780_yyy.


5.0 Downloading the Firmware Package

Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages.

Note: If your HMC is not internet-connected you will need to download the new firmware level to a CD-ROM or ftp server.


6.0 Installing the Firmware

The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename.

Example: AMXXX_YYY_ZZZ

Where XXX = release level


HMC Managed Systems:

Instructions for installing firmware updates and upgrades on systems managed by an HMC can be found at:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7ha1/updupdates.htm

Systems not Managed by an HMC:

Power Systems:
Instructions for installing firmware on systems that are not managed by an HMC can be found at:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7ha5/fix_serv_firm_kick.htm


IBM i Systems:
See "IBM Server Firmware and HMC Code Wizards":
http://www-912.ibm.com/s_dir/slkbase.NSF/DocNumber/408316083

NOTE: For all systems running with the IBM i Operating System, the following IBM i PTFs must be applied to all IBM i partitions prior to installing AM780_059:
These PTFs can be ordered through Fix Central.

7.0 Firmware History

The complete Firmware Fix History for this Release Level can be reviewed at the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/AM-Firmware-Hist.html