Power10 System Firmware

Applies to:   9043-MRX

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.



1.0 Systems Affected

This package provides firmware for the IBM Power System E1050 (9043-MRX) server only.

The firmware level in this package is MM1020_085_079 / FW1020.10.

1.1 Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. The HMC level must be equal to or higher than the "Minimum HMC Code Level" before the system firmware update is started. If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code levels for this firmware for HMC x86,  ppc64 or ppc64le are listed below.

NOTE: The HMC must be at a prerequisite level of HMC 1020.02 (September Monthly PTF) or 1021 (HMC 1020 SP1) before installing FW1020.10 or later service packs.  This level will fix the HMC so that it will show any deferred defects in the service pack being installed.

x86 - This term refers to the legacy HMC that runs on x86/Intel/AMD hardware or the virtual HMC that can run on Intel hypervisors (KVM, XEN, VMware ESXi).
ppc64 or ppc64le - This term describes the Linux code that is compiled to run on Power-based servers or LPARs (Logical Partitions).
The Minimum HMC level supports the following HMC models:
HMC models: 7063-CR1 and 7063-CR2
x86 - KVM, XEN, VMWare ESXi (6.0/6.5)
ppc64le - vHMC on PowerVM (POWER8, POWER9, and POWER10 systems)
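
As a convenience (not a required step), the HMC version, release, and build level can be displayed from the HMC command line and compared against the minimum level and the PTF prerequisite noted above:

    lshmc -V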

For information concerning HMC releases and the latest PTFs,  go to the following URL to access Fix Central:
https://www.ibm.com/support/fixcentral/

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT):
https://esupport.ibm.com/customercare/flrt/home


NOTES:

                - You must be logged in as hscroot for the firmware installation to complete correctly.
                - Systems Director Management Console (SDMC) does not support this System Firmware level.

2.0 Important Information

NovaLink levels earlier than the "NovaLink 1.0.0.16 Feb 2020 release" with partitions running certain SR-IOV capable adapters are NOT supported at this firmware release.

NovaLink levels earlier than "NovaLink 1.0.0.16 Feb 2020 release" do not support IO adapter FCs EC2R/EC2S, EC2T/EC2U, EC66/EC67 with FW1010 and later. 

2.2 Concurrent Firmware Updates

Concurrent system firmware update is supported on HMC Managed Systems only.

Ensure that there are no RMC connection issues for any system partitions prior to applying the firmware update.  If there is an RMC connection failure to a partition during the firmware update, the RMC connection will need to be restored, and additional recovery actions for that partition will be required to complete the partition firmware updates.
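
One way to check for RMC connection issues before starting the update (assuming command-line access to the HMC) is the lspartition command in the HMC restricted shell:

    lspartition -dlpar

Partitions with a working RMC connection are reported as active; any partition that is missing from the list or reported as inactive should have its RMC connection restored before the firmware update is applied.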

2.3 Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory.
Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors, such as the partition configuration and the I/O devices assigned to the partitions.
Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, some server models require an absolute minimum amount of memory for server firmware, regardless of the previously mentioned considerations.
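
As a rough worked example using the 8% guideline above: a server with 1024 GB of installed memory could reserve up to approximately 1024 GB x 0.08 = 82 GB for server firmware, although the actual amount used is generally less.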

Additional information can be found at:
https://www.ibm.com/docs/en/power10/9043-MRX?topic=resources-memory

2.4 SBE Updates

Power10 servers contain Self Boot Engines (SBEs), which are used to boot the system.  The SBE is internal to each of the Power10 chips and is used to "self boot" the chip.  The SBE image is persistent and is only reloaded if a system firmware update contains an SBE change.  If there is an SBE change and the system firmware update is concurrent, the SBE update is delayed to the next IPL of the CEC, which will add an additional 3-5 minutes per processor chip in the system to the IPL.  If there is an SBE change and the system firmware update is disruptive, the SBE update will add an additional 3-5 minutes per processor chip in the system to the IPL.  During the SBE update process, the HMC or op-panel will display service processor code C1C3C213 for each of the SBEs being updated.  This is a normal progress code and the system boot should not be terminated by the user.  The additional time can be between 12-20 minutes per drawer, or up to 48-80 minutes for a maximum configuration.

The SBE image is updated with this service pack.


3.0 Firmware Information

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive.

For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed. Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be released.

System firmware file naming convention:

01VHxxx_yyy_zzz

NOTE: Values of service pack and last disruptive service pack level (yyy and zzz) are only unique within a release level (xxx). For example, 01VH900_040_040 and 01VH910_040_040 are different service packs.

An installation is disruptive if:

The release level (xxx) is different from the release level currently installed on the system.
            Example: Currently installed release is 01VH900_040_040, new release is 01VH910_050_050.

The service pack level (yyy) and the last disruptive service pack level (zzz) of the new service pack are equal.
            Example: VH910_040_040 is disruptive, no matter what level of VH910 is currently installed on the system.

The service pack level (yyy) currently installed on the system is lower than the last disruptive service pack level (zzz) of the service pack to be installed.
            Example: Currently installed service pack is VH910_040_040 and new service pack is VH910_050_045.

An installation is concurrent if:

The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or higher than the last disruptive service pack level (zzz) of the service pack to be installed.

Example: Currently installed service pack is VH910_040_040, new service pack is VH910_041_040.
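
For illustration only (this is not an IBM-provided tool), the following shell sketch applies the concurrent rule above to the file name fields, using the same hypothetical levels as the example:

    # Hypothetical levels taken from the example above
    installed=01VH910_040_040      # currently installed level
    new=01VH910_041_040            # new service pack to be installed
    inst_rel=$(echo "$installed" | cut -d_ -f1)   # release level (xxx)
    inst_sp=$(echo "$installed" | cut -d_ -f2)    # installed service pack level (yyy)
    new_rel=$(echo "$new" | cut -d_ -f1)          # release level of the new pack
    new_zzz=$(echo "$new" | cut -d_ -f3)          # last disruptive service pack level (zzz)
    if [ "$inst_rel" = "$new_rel" ] && [ "$inst_sp" -ge "$new_zzz" ]; then
        echo "Installation can be concurrent"
    else
        echo "Installation is disruptive"
    fi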

3.1 Firmware Information and Description

 
Filename:  01MM1020_085_079.img
Size:      271637040
Checksum:  16709
md5sum:    e7080afbe13b629814ea31214ed79741

Note: The Checksum can be found by running the AIX sum command against the firmware image file (only the first 5 digits are listed).
For example: sum 01MM1020_085_079.img
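
If an AIX system is not available, the md5sum value listed above can be verified on a Linux system (for example, a workstation or partition used to stage the file) with:

    md5sum 01MM1020_085_079.img

The output should match the md5sum value shown in the table above.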

MM1020
For Impact, Severity, and other firmware definitions, please refer to the following 'Glossary of firmware terms' url:
https://www.ibm.com/support/pages/node/6555136

The complete Firmware Fix History for this Release Level can be reviewed at the following url:
https://public.dhe.ibm.com/software/server/firmware/MM-Firmware-Hist.html
MM1020_085_079 / FW1020.10

09/23/22
Impact: Availability    Severity:  SPE

New Features and Functions
  • Support was added to the eBMC ASMI "Resource management -> System parameters->Aggressive prefetch" for Prefetch settings to enable or disable an alternate configuration of the processor core/nest to favor more aggressive prefetching behavior for the cache.  "Aggressive prefetch" is disabled by default and a change to enable it must be done at service processor standby.  The default behavior of the system ("Aggressive prefetch" disabled) will not change in any way with this new feature.  The customer will need to power off and enable "Aggressive prefetch" to get the new behavior.  Only change the "Aggressive prefetch" value if instructed by support or if recommended by a solution vendor as it might cause degraded system performance.
  • DEFERRED:  Support was added to the eBMC ASMI "Resource management->System parameters" for an option to set a Frequency cap.  When enabled, the cap prevents all processors in the system from exceeding the specified maximum operating frequency (given in MHz).
  • Support was added for parsing On-Chip Controller (OCC) BC8A2Axx SRC information for the eBMC ASMI Event logs.
  • Support was added to the eBMC ASMI for a search option for the assemblies section on the inventory page.
System firmware changes that affect all systems
  • DEFERRED: A problem was fixed to clear the "deconfigured by error ID" property for a re-enabled Field Core Override (FCO) core that is fully functional and being used by the system.  This can happen if the system boots to runtime with FCO enabled such that one or more cores were disabled to achieve the FCO cap, and one of those enabled cores is then guarded at runtime.  On a subsequent memory preserving IPL (MPIPL), a different core (disabled on the previous boot) may be brought back online to meet the FCO number, but it will have the "deconfigured by error ID" property set, indicating that it is still deconfigured by FCO.
  • DEFERRED: A problem was fixed for the eBMC ASMI "PCIe Hardware topology" information not being updated when a PCIe expansion drawer firmware update occurs or a type/model/serial number change is done.  The location codes for the PCIe expansion drawer FRUs and/or PCIe expansion drawer firmware version may not be correct.  The problem occurs when a PCIe expansion drawer change is done more than once to a given drawer but only the first change is shown.
  • DEFERRED: A problem was fixed for a PCIe switch being recovered instead of a port for a port error.  Since the switch is getting recovered instead of the port, all the other adapters under the switch are reset for the recovery action (and have a functional loss for a brief moment), instead of the lone adapter associated with the port.  Any downstream port level errors under the switch can trigger switch reset instead of port level reset.  After switch recovery, all the adapters under the switch will be operational.
  • A problem was fixed for a cable card port identify indicator that will not correctly display or modify from an OS following a concurrent cable card repair operation.  As a workaround, the cable card port identify can be done from the HMC or the eBMC ASMI. 
  • A problem was fixed for a concurrent exchange of a PCIe expansion drawer Midplane with PCIe expansion drawer slots owned by an active partition that fails at the Set Service Lock step.  This fails every time the concurrent exchange is attempted.
  • A problem was fixed for a rare system hang that can happen any time Dynamic Platform Optimization (DPO), memory guard recovery, or memory mirroring defragmentation occurs for a dedicated processor partition running in Power9 or Power10 processor compatibility mode. This does not affect partitions in Power9_base or older processor compatibility modes. If the partition has the "Processor Sharing" setting set to "Always Allow" or "Allow when partition is active", it may be more likely to encounter this than if the setting is set to "Never allow" or "Allow when partition is inactive".
    This problem can be avoided by not using DPO or using Power9_base processor compatibility mode for dedicated processor partitions. This can also be avoided by changing all dedicated processor partitions to use shared processors.
  • A problem was fixed for a partition with VPMEM failing to activate after a system IPL with SRC B2001230 logged for a "HypervisorDisallowsIPL" condition.  This problem is very rare and is triggered by the partition's hardware page table (HPT) being too big to fit into a contiguous space in memory.  As a workaround, the problem can be averted by reducing the memory needed for the HPT.  For example, if the system memory is mirrored, the HPT size is doubled, so turning off mirroring is one option to save space.  Or the size of the VPMEM LUN could be reduced.  The goal of these options would be to free up enough contiguous blocks of memory to fit the partition's HPT size.
  • A problem was fixed for an SR-IOV adapter in shared mode failing on an IPL with SRC B2006002 logged.  This is an infrequent error caused by a different SR-IOV adapter than expected being associated with the slot because of the same memory buffer being used by two SR-IOV adapters.  The failed SR-IOV adapter can be powered on again and it should boot correctly.
  • A problem was fixed for a processor core being incorrectly predictively deconfigured with SRC BC13E504 logged.  This is an infrequent error triggered by a cache line delete fail for the core with error log "Signature": "EQ_L2_FIR[0]: L2 Cache Read CE, Line Delete Failed".
  • A problem was fixed for the hypervisor to detect when it was missing Platform Descriptor Records (PDRs) from Hostboot and to log an SRC A7001159 for this condition.  The PDRs can be missing if the eBMC Platform Level Data Model (PLDM) failed and restarted during the IPL prior to the exchange of the PDRs with the Hypervisor.
    With the PDRs missing from the Hypervisor, the user would be unable to manage FRUs (such as LED control and slot concurrent maintenance).  A power off and power on of the system would recover from the problem.
  • A problem was fixed for register MMCRA bit 63 (Random Sampling Enable) being lost after a partition thread going into a power save state, causing performance tools that use the performance monitor facility to possibly collect incorrect data for an idle partition.
  • A problem was fixed for the SMS menu option "I/O Device Information".  When using a partition's SMS menu option "I/O Device Information" to list devices under a physical or virtual Fibre Channel adapter, the list may be missing or entries in the list may be confusing. If the list does not display, the following message is displayed:
    "No SAN adapters present.  Press any key to continue".
    An example of a confusing entry in a list follows:
    "Pathname: /vdevice/vfc-client@30000004
    WorldWidePortName: 0123456789012345
     1.  500173805d0c0110,0                 Unrecognized device type: c"
  • A problem was fixed for booting an OS using iSCSI from SMS menus that fails with a BA010013 information log.  This failure is intermittent and infrequent.  If the contents of the BA010013 are inspected, the following messages can be seen embedded within the log:
    " iscsi_read: getISCSIpacket returned ERROR"
    " updateSN: Old iSCSI Reply - target_tag, exp_tag"
  • A problem was fixed for an adapter port link not coming up after the port connection speed was set to "auto".  This can happen if the speed had been changed to a supported but invalid value for the adapter hardware prior to changing the speed to "auto".  A workaround to this problem is to disable and enable the switch port.
  • A problem was fixed for possible incorrect system fan speeds that can occur when an NVMe drive is pulled when the system is running.  This can occur if the pulled device is hot (over 58 C in temperature) or has a broken temperature sensor connection.  For these cases, the system fan control will either leave the fans running at high speed or keep increasing fans to the maximum speed.  If this problem occurs, it can be corrected by a reboot of the eBMC service processor.
  • A problem was fixed to remove an unneeded message "Power restore policy can not be changed while in manual operating mode" that occurs when viewing the eBMC ASMI "Power Restore Policy" in normal mode.  This message should only be shown when in manual operating mode.
  • A problem was fixed for timestamps for eBMC sensor values showing the wrong time and day when viewed by telemetry reports such as Redfish "MetricReport".  The timestamp can be converted to actual time and day by adding an epoch offset of 1970-1-1 to the timestamp value.
  • A problem was fixed for an empty NVMe slot reporting as an "Unrecognized FRU" but functional on the OS.
  • A problem was fixed for the eBMC ASMI PCIe Topology page showing the width of empty slots as "-1".  With the fix, the width of an empty slot displays as "unknown".
  • A problem was fixed for a false error message "Error resetting link" from the eBMC ASMI PCIe Topology page when setting an Identify LED for a PCIe slot.  The LED functions correctly for the operation but an error message is observed.
  • A problem was fixed for the eBMC ASMI "Operations->Host console" to show the correct connection status.  The status was not being updated as needed so it could show "Disconnected" even though the connection was active.
  • A problem was fixed on the eBMC ASMI "Operations->Firmware" page to prevent an early task completed message when switching running and backup images.  The early completion message does not cause an error in switching the firmware levels.
  • A problem was fixed on the eBMC ASMI "Resource management -> Memory -> System memory page setup" to prevent an invalid large value from being specified for "Requested huge page memory".  Without the fix, the out of range value higher than the maximum is accepted which can cause errors when allocating the memory for the partitions.
  • A problem was fixed on the eBMC ASMI Overview page to show the correct status of disabled for a Service Account that has been disabled. The User Management page, however, shows the correct status for Service Account and it is disabled in the eBMC.  This happens every time a Service Account is disabled.
  • A problem was fixed on the eBMC ASMI Overview page for the Server information "Asset tag" to show the correct updated "Asset tag" value after doing an edit of the tag and then a refresh of the page.  Without the fix, the old value is shown even though the change was successful.
  • A problem was fixed on the eBMC ASMI Overview->Firmware page where the Update firmware "Manage access keys" link is incorrectly disabled when the system is powered on.  This prevents the user from accessing the Capacity on demand (COD) page.  This traversal path works if the system is powered off.  The Firmware page is reached from the Overview page by going to the Firmware information frame and clicking on "View More".  Alternatively, the COD page can be reached using the side navigation bar with the "Resource management ->Capacity on demand" link as this works for the case where the system is powered on.
  • A problem was fixed for the eBMC ASMI "Settings->Power restore policy" to make it default to "Last state".  The current default is "Always off".  If power is lost to the system, it can be manually powered back on.  Or the user can configure the Power restore policy" to the desired value.
  • A problem was fixed for the eBMC ASMI Deconfiguration records not having the associated event log ID (PEL ID) that caused the deconfiguration of the hardware.  This occurs anytime hardware is deconfigured and an ASMI Deconfiguration record is created.
  • A problem was fixed for the eBMC ASMI PCIe Topology page not having the NVME adapter/slot listed correctly.  As a workaround, the PCIe Topology information can be read from the HMC PCIe Topology view to get the NVME adapter/slot.
  • A problem was fixed for a short loss or dip in input power to a power supply causing SRC 110015F1 to be logged with message "The power supply detected a fault condition, see AdditionalData for further details."  The running system is not affected by this error.  This Unrecoverable Error (UE) SRC should not be logged for a very short power outage.   Ignore the error log if all power supplies have recovered.
  • A problem was fixed for an 110000AC SRC being logged for a false brownout condition after a faulted power supply is removed.  This problem occurs if the eBMC incorrectly categorizes the number of power supplies present, missing, and faulted to determine whether a brownout has occurred.  The System Attention LED may be lit if this problem occurs and it can be turned off using the HMC.
  • A problem was fixed for an eBMC dump being generated during a side switch IPL.  The side switch IPL is successful and no error log is reported.  This occurs on every side switch IPL.  For this situation, the eBMC dump can be ignored.
  • A problem was fixed for the eBMC falsely detecting an incorrect number of On-Chip Controllers (OCCs) during an IPL with SRC BD8D2681 logged.  This is a random and infrequent error on an IPL that recovers automatically with no impact to the system.
  • A problem was fixed for eBMC ASMI Hardware deconfiguration records for DIMM and Core hardware being incorrectly displayed after a Factory reset "Reset server settings only".  The deconfiguration records existing prior to this type of Factory reset will be displayed in ASMI after the factory reset but they are actually cleared in the system. A full factory reset using Factory reset "Reset BMC and server settings" does clear any existing deconfiguration records from ASMI.
  • A problem was fixed for eBMC ASMI failing to set a static IP address when switching from DHCP to static IP in the eBMC network configuration.  This occurs if the static IP selected is the same as the one that was used by DHCP.  This problem can be averted by disabling DHCP prior to assigning the static IP address.
  • A problem was fixed for the eBMC ASMI "Settings->Power restore policy" of  "Last state" where the system failed to power back on after an AC outage.  This can happen if the last IPL to the host run time state was a reboot by hostboot firmware for an SBE update, or if the last IPL was a warm reboot.
  • A problem was fixed for the eBMC ASMI Real time indicators for special characters being displayed that should have been suppressed.  This problem is intermittent but fairly frequent.  The special characters can be ignored.
  • A problem was fixed for the eBMC ASMI "Operations->System power operations-> Server power policy" of Automatic to correct the text describing this feature.  It was changed from "System automatic power off" to " With this setting, when the system is not partitioned, the behavior is the same as 'Power off', and when the system is partitioned, the behavior of the system is the same as 'Stay on'".
  • A problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" PCIe link type field which had some PCIe adapter slots showing as primary when they should be secondary.  The PCIe adapter switch slots are secondary buses, so these should be displayed as "Secondary" on the Link properties type.
  • A performance problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" page to reduce the amount of time the page takes to load.  The fix reduces internal calls by half for the loading process for each PCIe adapter in the system, so the improvement time is more for the larger systems.
  • A problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" page for missing information for the NVMe drive associated with an NVMe slot.  The drive in the slot is required to populate attributes like link speed, but these are empty when the problem causes the drive to not be found.  This is an ASMI display problem only for the PCIe topology screen as the NVMe drive is functional in the system.
  • A problem was fixed for a request to generate a resource dump that has missing parameters causing an eBMC bmcweb core dump.
  • A problem was fixed for extra logging of SRC BD56100A if the LCD panel is unplugged during an IPL.  The LCD panel supports install and removal while the system is running, so any SRCs logged for this should be minimal, but many were logged when this was done during the IPL.
  • A problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" page not updating the link status to "Unknown" or "Failed" when it has failed for a PCIe adapter.  The link continues to show as operational.  The HMC PCIe Topology view can be used to show the correct status of the link.
  • A problem was fixed for an eBMC SRC BD602803 not referencing a temperature issue as a cause for the SRC.  There is a missing message and callout for an over temperature fault.  With the fix, the OVERTMP symbolic FRU is called out for the parent FRU of the temperature sensor.
  • A problem was fixed for an eBMC dump created on a hot plug or unplug of an NVMe drive.  The dump should not be created for this situation and can be ignored or deleted.
  • A problem was fixed for the eBMC ASMI "Deconfiguration records" page option to "download additional data" that creates a file in a non-human readable format.  A workaround for the problem would be to go to the eBMC ASMI "Event logs" page using the SRC code that caused the hardware to be deconfigured and then download the event log details from there.
  • A problem was fixed for recovery from USB firmware update failures.  A failure in the USB update was causing an incomplete second try where an eBMC reboot was needed to ensure the code update retry worked properly.
System firmware changes that affect certain systems
  • DEFERRED: On systems with AIX or Linux partitions, a problem was fixed for certain I/O slots that have an incorrect description in the output from the lspci and lsslot commands in AIX and Linux operating systems.  This occurs anytime one of the affected slots is assigned to an AIX or Linux partition.  
    The following slots are affected:
      "Combination slots" (those that are PCI gen 4 x16 connector with x16 lanes connected OR PCI gen5 x16 connector with 8 lanes connected).
     P0-C2
     P0-C3
     P0-C4
     P0-C5
     P0-C8
     P0-C11
    NVME drive slots.  This affects the following slots:
    P1-C0
    P1-C1
    P1-C2
    P1-C3
    P1-C4
    P1-C5
    P1-C6
    P1-C7
    P1-C8
    P1-C9
  • For HMC managed systems, a problem was fixed for read-only fields on the eBMC ASMI Memory Resource Management page (Logical Memory block size, System Memory size, I/O adapter enlarged capacity, and Active Memory Mirroring) being editable in the GUI when the system is powered off.  Any changes made in this manner would not be synchronized to the HMC (so the system would still use the HMC settings).  To correct this problem, the Memory page settings should be changed on the HMC.
  • For a two socket processor configuration, a problem was fixed for a power voltage fault with SRC 11002620 having the wrong location codes referencing nonexistent hardware.
  • For a system that is managed by an HMC, a problem was fixed for the eBMC ASMI "Operations->Server power operations" page showing AIX/Linux partition boot mode and IBM i partition boot options which are not applicable to a HMC managed system.

4.0 How to Determine The Currently Installed Firmware Level

You can view the server's current firmware level on the Advanced System Management Interface (ASMI) Overview page under the System Information section in the Firmware Information panel. Example: (MM1020_079)
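
Depending on your environment, the installed level can also be queried from the HMC command line or from an AIX partition; for example (with <managed system> as a placeholder for your server's name on the HMC):

    lslic -m <managed system> -t sys      (from the HMC command line)
    lsmcode -A                            (from an AIX partition)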


5.0 Downloading the Firmware Package

Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages.

Note: If your HMC is not internet-connected, you will need to download the new firmware level to a USB flash memory device or an FTP server.


6.0 Installing the Firmware

The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename.

Example: MMxxx_yyy_zzz

Where xxx = release level

Instructions for installing firmware updates and upgrades can be found at https://www.ibm.com/docs/en/power10/9043-MRX?topic=9043-MRX/p10eh6/p10eh6_updates_sys.htm
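
As an illustrative sketch only (the exact options depend on your HMC level and the repository you use; verify against the updlic command description in the HMC documentation), a command-line update from the IBM service website might look similar to:

    updlic -m <managed system> -o a -t sys -l latest -r ibmwebsite

Here -o a installs and activates the firmware, -t sys targets managed system firmware, -l latest selects the latest available level, and -r ibmwebsite names the repository; <managed system> is a placeholder for your server's name on the HMC. The documented procedure linked above remains the recommended path.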

IBM i Systems:

For information concerning IBM i Systems, go to the following URL to access Fix Central: 
https://www.ibm.com/support/fixcentral/

Choose "Select product", under Product Group specify "System i", under Product specify "IBM i", then Continue and specify the desired firmware PTF accordingly.

HMC and NovaLink Co-Managed Systems (Disruptive firmware updates only):

A co-managed system is managed by the HMC and NovaLink, with one of the interfaces in the co-management master mode.
Instructions for installing firmware updates and upgrades on systems co-managed by an HMC and NovaLink are the same as above for HMC managed systems, since the firmware update must be done by the HMC in the co-management master mode.  Before the firmware update is attempted, ensure that the HMC is set to the master mode using the steps at the following IBM documentation link for NovaLink co-managed systems:
https://www.ibm.com/docs/en/power10/9043-MRX?topic=environment-powervm-novalink
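
As a hedged example (assuming a recent HMC level; verify against the documentation linked above), the co-management master setting can be checked and, if needed, changed from the HMC command line with commands similar to:

    lscomgmt -m <managed system>
    chcomgmt -m <managed system> -o setmaster -t norm

where <managed system> is a placeholder for your server's name on the HMC, and the second command requests that this HMC become the co-management master.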

Then the firmware updates can proceed with the same steps as for HMC managed systems, except the system must be powered off because only a disruptive update is allowed.  If a concurrent update is attempted, the following error will occur: "HSCF0180E Operation failed for <system name> (<system mtms>).  The operation failed.  E302F861 is the error code."
https://www.ibm.com/docs/en/power10/9043-MRX?topic=9043-MRX/p10eh6/p10eh6_updates_sys.htm

7.0 Firmware History

The complete Firmware Fix History (including HIPER descriptions)  for this Release level can be reviewed at the following url:
https://public.dhe.ibm.com/software/server/firmware/MM-Firmware-Hist.html