01VM920_112_101.html

Power9 System Firmware

Applies to: 9040-MR9

This document provides information about the installation of Licensed Machine or Licensed Internal Code, which is sometimes referred to generically as microcode or firmware.

----------------------------------------------------------------------------------

Contents

* 1.0 Systems Affected
* 1.1 Minimum HMC Code Level
* 2.0 Important Information
* 2.1 IPv6 Support and Limitations
* 2.2 Concurrent Firmware Updates
* 2.3 Memory Considerations for Firmware Upgrades
* 2.4 NIM install issue using SR-IOV shared mode at FW 920.20, 920.21, and 920.22
* 2.5 SBE Updates
* 3.0 Firmware Information
* 3.1 Firmware Information and Description Table
* 4.0 How to Determine Currently Installed Firmware Level
* 5.0 Downloading the Firmware Package
* 6.0 Installing the Firmware
* 7.0 Firmware History
* 8.0 Change History

Revised (09/19/2019)

----------------------------------------------------------------------------------

1.0 Systems Affected

This package provides firmware for Power Systems E950 (9040-MR9) servers only.

The firmware level in this package is:

* VM920_112 / FW920.40

----------------------------------------------------------------------------------

1.1 Minimum HMC Code Level

This section describes the "Minimum HMC Code Level" required by the System Firmware to complete the firmware installation process. When installing the System Firmware, the HMC level must be equal to or higher than the "Minimum HMC Code Level" before starting the system firmware update. If the HMC managing the server targeted for the System Firmware update is running a code level lower than the "Minimum HMC Code Level", the firmware update will not proceed.

The Minimum HMC Code levels for this firmware for HMC x86, ppc64, or ppc64le are listed below.
x86 - This term is used to reference the legacy HMC that runs on x86/Intel/AMD hardware for both the 7042 Machine Type appliances and the Virtual HMC that can run on the Intel hypervisors (KVM, VMWare, Xen).

* The Minimum HMC Code level for this firmware is: HMC V9R1M921 (PTF MH01789).
* Although the Minimum HMC Code level for this firmware is listed above, HMC V9R2M951.2 (PTF MH01892) or higher is recommended to avoid an issue that can cause the HMC to lose connections to all servers for a brief time, with service events E2FF1409 and E23D040A being reported. This will cause all running server tasks, such as a server firmware upgrade, to fail.

ppc64 or ppc64le - describes the Linux code that is compiled to run on Power-based servers or LPARs (Logical Partitions).

* The Minimum HMC Code level for this firmware is: HMC V9R1M921 (PTF MH01790).
* Although the Minimum HMC Code level for this firmware is listed above, HMC V9R2M951.2 (PTF MH01892) or higher is recommended to avoid an issue that can cause the HMC to lose connections to all servers for a brief time, with service events E2FF1409 and E23D040A being reported. This will cause all running server tasks, such as a server firmware upgrade, to fail.

For information concerning HMC releases and the latest PTFs, go to the following URL to access Fix Central: http://www-933.ibm.com/support/fixcentral/

For specific fix level information on key components of IBM Power Systems running the AIX, IBM i and Linux operating systems, we suggest using the Fix Level Recommendation Tool (FLRT): http://www14.software.ibm.com/webapp/set2/flrt/home

NOTES:
- You must be logged in as hscroot in order for the firmware installation to complete correctly.
- Systems Director Management Console (SDMC) does not support this System Firmware level.

2.0 Important Information

Downgrading firmware from any given release level to an earlier release level is not recommended.
If you feel that it is necessary to downgrade the firmware on your system to an earlier release level, please contact your next level of support.

2.1 IPv6 Support and Limitations

IPv6 (Internet Protocol version 6) is supported in the System Management Services (SMS) in this level of system firmware. There are several limitations that should be considered.

When configuring a network interface card (NIC) for remote IPL, only the most recently configured protocol (IPv4 or IPv6) is retained. For example, if the network interface card was previously configured with IPv4 information and is now being configured with IPv6 information, the IPv4 configuration information is discarded.

A single network interface card may only be chosen once for the boot device list. In other words, the interface cannot be configured for the IPv6 protocol and for the IPv4 protocol at the same time.

2.2 Concurrent Firmware Updates

Concurrent system firmware update is supported on HMC Managed Systems only. Ensure that there are no RMC connection issues for any system partitions prior to applying the firmware update. If there is an RMC connection failure to a partition during the firmware update, the RMC connection will need to be restored and additional recovery actions for that partition will be required to complete partition firmware updates.

2.3 Memory Considerations for Firmware Upgrades

Firmware Release Level upgrades and Service Pack updates may consume additional system memory. Server firmware requires memory to support the logical partitions on the server. The amount of memory required by the server firmware varies according to several factors.
Factors influencing server firmware memory requirements include the following:

* Number of logical partitions
* Partition environments of the logical partitions
* Number of physical and virtual I/O devices used by the logical partitions
* Maximum memory values given to the logical partitions

Generally, you can estimate the amount of memory required by server firmware to be approximately 8% of the system installed memory. The actual amount required will generally be less than 8%. However, some server models require an absolute minimum amount of memory for server firmware, regardless of the previously mentioned considerations.

Additional information can be found at: https://www.ibm.com/support/knowledgecenter/9040-MR9/p9hat/p9hat_lparmemory.htm

2.4 NIM install issue using SR-IOV shared mode at FW 920.20, 920.21, and 920.22

A defect in the adapter firmware for Feature Codes EN15, EN17, EN0H, and EN0K was included in IBM Power Server Firmware levels 920.20, 920.21, and 920.22. This defect causes attempts to perform NIM installs using a Virtual Function (VF) to hang or fail. Circumvention options for this problem can be found at the following link: http://www.ibm.com/support/docview.wss?uid=ibm10794153

2.5 SBE Updates

Power9 servers contain Self Boot Engines (SBEs), which are used to boot the system. The SBE is internal to each of the Power9 chips and is used to "self boot" the chip. The SBE image is persistent and is only reloaded if a system firmware update contains an SBE change. If there is an SBE change and the system firmware update is concurrent, the SBE update is delayed to the next IPL of the CEC, which will add an additional 3-5 minutes per processor chip in the system to the IPL. If there is an SBE change and the system firmware update is disruptive, the SBE update will add an additional 3-5 minutes per processor chip in the system to the IPL.
During the SBE update process, the HMC or op-panel will display service processor code C1C3C213 for each of the SBEs being updated. This is a normal progress code and the system boot should not be terminated by the user. The total additional time can be between 12-20 minutes.

----------------------------------------------------------------------------------

3.0 Firmware Information

Use the following examples as a reference to determine whether your installation will be concurrent or disruptive. For systems that are not managed by an HMC, the installation of system firmware is always disruptive.

Note: The concurrent levels of system firmware may, on occasion, contain fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can be installed concurrently, but will not be activated until the next IPL. Partition-Deferred fixes can be installed concurrently, but will not be activated until a partition reactivate is performed. Deferred and/or Partition-Deferred fixes, if any, will be identified in the "Firmware Update Descriptions" table of this document. For these types of fixes (Deferred and/or Partition-Deferred) within a service pack, only the fixes in the service pack which cannot be concurrently activated are deferred.

Note: The file names and service pack levels used in the following examples are for clarification only, and are not necessarily levels that have been, or will be, released.

System firmware file naming convention: 01VMxxx_yyy_zzz

* xxx is the release level
* yyy is the service pack level
* zzz is the last disruptive service pack level

NOTE: Values of the service pack and last disruptive service pack level (yyy and zzz) are only unique within a release level (xxx). For example, 01VM900_040_040 and 01VM910_040_045 are different service packs.

An installation is disruptive if:

* The release levels (xxx) are different. Example: The currently installed release is 01VM900_040_040 and the new release is 01VM910_050_050.
* The service pack level (yyy) and the last disruptive service pack level (zzz) are the same. Example: VM910_040_040 is disruptive, no matter what level of VM910 is currently installed on the system.
* The service pack level (yyy) currently installed on the system is lower than the last disruptive service pack level (zzz) of the service pack to be installed. Example: The currently installed service pack is VM910_040_040 and the new service pack is VM910_050_045.

An installation is concurrent if:

* The release level (xxx) is the same, and
* The service pack level (yyy) currently installed on the system is the same or higher than the last disruptive service pack level (zzz) of the service pack to be installed. Example: The currently installed service pack is VM910_040_040 and the new service pack is VM910_041_040.

3.1 Firmware Information and Description

Filename: 01VM920_112_101.rpm
Size: 118927430
Checksum: 48687
md5sum: 133986d82f75677df92aa1dbe83ccc63

Note: The Checksum can be found by running the AIX sum command against the rpm file (only the first 5 digits are listed), e.g.: sum 01VM920_112_101.rpm

VM920

For Impact, Severity and other Firmware definitions, please refer to the 'Glossary of firmware terms' URL below: http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs

The complete Firmware Fix History for this Release Level can be reviewed at the following URL: http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VM-Firmware-Hist.html

VM920_112_101 / FW920.40

08/06/19

Impact: Data            Severity: HIPER

New features and functions

* An option was added to the SMS Remote IPL (RIPL) menus to enable or disable the UDP checksum calculation for any device type. Previously, this checksum option was only available for logical LAN devices, but it is now extended to all device types. The default is for the UDP checksum calculation to be done, but if this calculation causes errors for the device, it can be turned off with the new option.
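As an aside, the disruptive/concurrent decision rules from the naming convention in section 3.0 can be expressed as a small sketch. The function and variable names below (`parse_level`, `is_disruptive`) are hypothetical illustrations, not part of any IBM tooling:

```python
import re

def parse_level(name):
    """Split a firmware level such as '01VM920_112_101' or 'VM920_112_101'
    into (release, service_pack, last_disruptive_service_pack)."""
    m = re.match(r"(?:01)?VM(\d+)_(\d+)_(\d+)$", name)
    if m is None:
        raise ValueError(f"unrecognized firmware level: {name}")
    return tuple(int(g) for g in m.groups())

def is_disruptive(installed, candidate):
    """Apply the section 3.0 rules: an install is disruptive if the release
    levels (xxx) differ, if the candidate's yyy equals its zzz, or if the
    installed yyy is lower than the candidate's zzz."""
    rel_i, sp_i, _ = parse_level(installed)
    rel_c, sp_c, ldsp_c = parse_level(candidate)
    if rel_i != rel_c:
        return True           # different release levels (xxx)
    if sp_c == ldsp_c:
        return True           # yyy == zzz: pack is itself disruptive
    return sp_i < ldsp_c      # installed pack predates last disruptive pack
```

Applied to the document's own examples: 01VM900_040_040 to 01VM910_050_050 and VM910_040_040 to VM910_050_045 are disruptive, while VM910_040_040 to VM910_041_040 is concurrent.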
System firmware changes that affect all systems * HIPER/Pervasive:  A change was made to fix an intermittent processor anomaly that may result in issues such as operating system or hypervisor termination, application segmentation fault, hang, or undetected data corruption.  The only issues observed to date have been operating system or hypervisor terminations. * DEFERRED:PARTITION_DEFERRED:  A problem was fixed for repeated CPU DLPAR remove operations by Linux (Ubuntu, SUSE, or RHEL) OSes possibly resulting in a partition crash.  No specific SRCs or error logs are reported.   The problem can occur on any DLPAR CPU remove operation if running on Linux.  The occurrence is intermittent and rare.  The partition crash may result in one or more of the following console messages (in no particular order):  1) Bad kernel stack pointer addr1 at addr2  2) Oops: Bad kernel stack pointer  3) ******* RTAS CALL BUFFER CORRUPTION *******  4)  ERROR: Token not supported This fix does not activate until there is a reboot of the partition. * A problem was fixed for a concurrent firmware update failure with SRC B7000AFF logged.  This is a rare problem triggered by a power mode change preceding a concurrent firmware update.  To recover from this problem, run the code update again without any power mode changes. * A problem was fixed for an IPMI core dump and SRC B181720D logged, causing the service processor to reset due to a low memory condition.  The memory loss is triggered by frequently using the ipmitool to read the network configuration.  The service processor recovers from this error but if three of these errors occur within a 15 minute time span, the service processor will go to a failed hung state with SRC B1817212 logged.  Should a service processor hang occur, OS workloads will continue to run but it will not be possible for the HMC to interact with the partitions.  This service processor hung state can be recovered by doing a re-IPL of the system with a scheduled outage. 
* A problem was fixed for informational logs flooding the error log if a "Get Sensor Reading" is not working. * A problem was fixed for a concurrent firmware hang with SRC B1813450 logged.  This is a rare problem triggered by an error or power mode change that requires a Power Management (PM)  Complex Reset.  To recover from this problem, re-IPL the system and it will be running at the target firmware update level. * A problem was fixed for a concurrent replace of the base Operator Panel with the LCD Operator Panel with feature code #EU0B that could result in the following errors if the replace takes longer than four minutes: 1) SRCs B1504805 and B1504804 logged against the Operator Panel. 2) The ambient temperature sensors in the system will be considered faulted by firmware.  As a result, firmware will not automatically shut the system down due to high ambient temperature (EPOW).  The system can be recovered with a reset of the service processor or a power down/ power up of the system.  The system can also be recovered by removing the LCD Operator Panel for at least two minutes and then plugging it back in. * A problem was fixed for shared processor pools where uncapped shared processor partitions placed in a pool may not be able to consume all available processor cycles.  The problem may occur when the sum of the allocated processing units for the pool member partitions equals the maximum processing units of the pool. * A problem was fixed for an outage of I/O connected to a single PCIe Host Bridge (PHB) with a B7006970 SRC logged.  With the fix, the rare PHB fault will have an EEH event detected and recovered by firmware. * A problem was fixed for partitions becoming unresponsive or the HMC not being able to communicate with the system after a processor configuration change or a partition power on and off. * A problem was fixed for a concurrent firmware update error with SRC B7000AFF logged.  
This is a rare problem triggered by an error or power mode change that requires a Power Management (PM) Complex Reset.  To recover from this problem, re-IPL the system and it will be running at the target firmware update level.
* A problem was fixed for possible abnormal terminations of programs on partitions running in POWER7 or POWER8 compatibility mode.
* A problem was fixed for a hypervisor hang that can occur on the target side when doing a Live Partition Mobility (LPM) migration from a system that does not support encryption and compression of LPM data.  If the hang occurs, the HMC will go to an "Incomplete" state for the target system.  The problem is rare because the data from the source partition must be in a very specific pattern to cause the failure.  When the failure occurs, a B182951C will be logged on the target (destination) system and the HMC for the source partition will issue the following message:  "HSCLA318 The migration command issued to the destination management console failed with the following error: HSCLA228 The requested operation cannot be performed because the managed system is not in the Standby or Operating state.".  To recover, the target system must be re-IPLed.
* A problem was fixed for an initialization failure of an SR-IOV adapter port during its boot, causing a B400FF02 SRC to be logged.  This is a rare problem and it recovers automatically by the reboot of the adapter on the error.
* A problem was fixed for SR-IOV adapter Virtual Functions (VFs) that can fail to restore to their configuration after a low-level EEH error, causing loss of function for the adapter.  This problem can occur if a configuration other than the default NIC VF configuration was selected when the VF was created.  The problem will occur all the time for VFs configured as RDMA over Converged Ethernet (RoCE), but much less frequently and intermittently for other non-default VF configurations.
* A problem was fixed which caused network traffic failures for Virtual Functions (VFs) operating in non-promiscuous multicast mode.  In non-promiscuous mode, when a VF receives a frame, it will drop it unless the frame is addressed to the VF's MAC address, or is a broadcast or multicast addressed frame.  With the problem, the VF drops the frame even though it is multicast, thereby blocking the network traffic, which can result in ping failures and impact other network operations.  To recover from the issue, turn multicast promiscuous on.  This may cause some unwanted multicast traffic to flow to the partition.
* A problem was fixed for a boot failure using an N_PORT ID Virtualization (NPIV) LUN for an operating system that is installed on a disk of 2 TB or greater, and having a device driver for the disk that adheres to a non-zero allocation length requirement for the "READ CAPACITY 16".  The IBM partition firmware had always used an invalid zero allocation length for the return of data and that had been accepted by previous device drivers.  Now some of the newer device drivers adhere to the specification and need a non-zero allocation length to allow the boot to proceed.
* A problem was fixed for a possible boot failure from an ISO/IEC 13346 formatted image, also known as Universal Disk Format (UDF). UDF is a profile of the specification known as ISO/IEC 13346 and is an open vendor-neutral file system for computer data storage for a broad range of media such as DVDs and newer optical disc formats.  The failure is infrequent and depends on the image.  In rare cases, the boot code erroneously fails to find a file in the current directory.  If the boot fails on a specific image, the boot of that image will always fail without the fix.
* A problem was fixed for broadcast bootp installs or boots that fail with a UDP checksum error.
* A problem was fixed for failing to boot from an AIX mksysb backup on a USB RDX drive with SRCs logged of BA210012, AA06000D, and BA090010.  The boot error does not occur if a serial console is used to navigate the SMS menus. * A problem was fixed for possible loss of mainstore memory dump data for system termination errors. * A problem was fixed for an intermittent IPL failure with B181345A, B150BA22, BC131705,  BC8A1705, or BC81703 logged with a processor core called out.  This is a rare error and does not have a real hardware fault, so the processor core can be unguarded and used again on the next IPL. * A problem was fixed for two false UE SRCs of B1815285 and B1702A03 possibly being logged on the first IPL of a 2-node system.  A VPD timing error can cause a 2-node system to be misread as a 4-node, causing the false SRCs.  This can only occur on the first IPL of the system. * A problem was fixed for a processor core fault in the early stages of the IPL that causes the service processor to terminate.  With the fix, the system is reconfigured to remove the bad core and the system is IPLed with the remaining processor cores. * A problem was fixed for a drift in the system time (time lags and the clock runs slower than the true value of time) that occurs when the system is powered off to the service processor standby state.  To recover from this problem, the system time must be manually corrected using the Advanced System Management Interface (ASMI) before powering on the system.  The time lag increases in proportion to the duration of time that the system is powered off. * A problem was fixed for hypervisor tasks getting deadlocked that cause the hypervisor to be unresponsive to the HMC ( this shows as an incomplete state on the HMC) with SRC B200F011 logged.  This is a rare timing error.  With this problem,  OS workloads will continue to run but it will not be possible for the HMC to interact with the partitions.  
This error can be recovered by doing a re-IPL of the system with a scheduled outage. * A problem was fixed for eight or more simultaneous Live Partition Mobility (LPM) migrations to the same system possibly failing in validation with the HMC error message of "HSCL0273 A command that was targeted to the managed system has timed out".  The problem can be circumvented by doing the LPM migrations to the same system in smaller batches. * A problem was fixed for a system IPLing with an invalid time set on the service processor that causes partitions to be reset to the Epoch date of 01/01/1970.  With the fix, on the IPL, the hypervisor logs a B700120x when the service processor real time clock is found to be invalid and halts the IPL to allow the time and date to be corrected by the user.  The Advanced System Management Interface (ASMI) can be used to correct the time and date on the service processor.  On the next IPL, if the time and date have not been corrected, the hypervisor will log a SRC B7001224 (indicating the user was warned on the last IPL) but allow the partitions to start, but the time and date will be set to the Epoch value. * A problem was fixed for the Advanced System Management Interface (ASMI) menu for "PCIe Hardware Topology/Reset link" showing the wrong value.  This value is always wrong without the fix. * A problem was fixed for SR-IOV adapters to provide a consistent Informational message level for cable plugging issues.  For transceivers not plugged on certain SR-IOV adapters, an unrecoverable error (UE) SRC B400FF03 was changed to an Informational message logged.  This affects the SR-IOV adapters with the following feature codes:  EC2S, EC2U, and EC3M. For copper cables unplugged on certain SR-IOV adapters, a missing message was replaced with an Informational message logged.  This affects the SR-IOV adapters with the following feature codes: EN17, EN0K, EN0L, EL3C,  and EL57. 
* A problem was fixed for incorrect Centaur DIMM callouts for DIMM over temperature errors.  The error log for the DIMM over temperature will have incorrect FRU callouts, either calling out the wrong DIMM or the wrong Centaur memory buffer.

System firmware changes that affect certain systems

* On systems with PCIe3 expansion drawers (feature code #EMX0), a problem was fixed where a concurrent exchange of a PCIe expansion drawer cable card, although successful, left the fault LED turned on.
* On systems using Utility COD, a problem was fixed for "Shared Processor Utilization Data" showing far more Non-Utility processors than are even installed.  This incorrect information can prevent the billing for the use of the Utility Processors.

VM920_101_101 / FW920.30

03/08/19

Impact: Data            Severity: HIPER

New features and functions

* Support was added to allow 3-socket processor configurations for the system. Previously, there had to be a minimum of two and a maximum of four sockets, but 3-socket configurations were not supported.
* The Operations Panel was enhanced to display a "Disruptive" warning for control panel operations that would disturb a running system.  For example, control panel function "03" is used to re-IPL the system and would get the warning message to alert the operator that the system could be impacted.
* A new SRC of B7006A74 was added for PHB LEM 62 errors that had surpassed a threshold in the path of the #EMX0 expansion drawer.
This replaces the SRC B7006A72 to have a correct callout list.  Without the feature, when B7006A72 is logged against a PCIe slot in the CEC containing a cable card, the FRUs in the full #EMX0 expansion drawer path should be considered (use the B7006A8B FRU callout list as a reference).

System firmware changes that affect all systems

* HIPER/Pervasive: DISRUPTIVE:  A problem was fixed where, under certain conditions, a Power Management Reset (PM Reset) event may result in undetected data corruption.  PM Resets occur under various scenarios such as power management mode changes between Dynamic Performance and Maximum Performance, Concurrent FW updates, power management controller recovery procedures, or system boot.
* DEFERRED:  A problem was fixed for I/O adapters that use LSI (Level Sensitive Interrupts) not functioning in slot C6.  This problem can be avoided by moving the adapter to a direct PCIe slot.  The system must be re-IPLed to activate this fix.
* DEFERRED:  A problem with slower than expected L2 cache memory update response was fixed to improve system performance for some workloads.  The slowdown was triggered by many concurrent processor threads trying to update the L2 cache memory atomically with a Power LARX/STCX instruction sequence.  Without the fix, the rate at which the system could do these atomic updates was slower than the normal L2 cache response, which could cause overall system performance to decrease.  This problem could be noticed for workloads that are cache bound (where speed of cache access is an important factor in determining the speed at which the program gets executed). For example, if the most visited part of a program is a small section of code inside a loop small enough to be contained within the cache, then the program may be cache bound.
* A problem was fixed for not being able to concurrently add the PCIe to USB conversion card with CCIN 6B6C.
The Vital Product Data (VPD) for the new FRU is not updated into the system, so the added part is not functional until the system is re-IPLed.
* A problem was fixed for a system mis-configured with a mix of DDR3 and DDR4 DIMMs in the same node failing without callouts for the problem DIMMs.  The system fails with SRC B181BAD4.  With the fix, the IPL will still fail but the SRC provides a list of the problem DIMMs so they can be guarded or physically removed.
* A problem was fixed for failed hardware such as a clock card causing the service processor to have slow performance.  This might be seen if a hardware problem occurs and the service processor appears to be hanging while error logs are collected.
* A problem was fixed for an IPL failing with B7000103 if there is an error in a PCIe Hub (PHB).  With the fix, the IPL is allowed to complete but there may be failed I/O adapters if the errant PHB is populated with PCIe adapters.
* A problem was fixed for a hypervisor task getting deadlocked if partitions are powered on at the same time that SR-IOV is being configured for an adapter.  With this problem, workloads will continue to run but it will not be possible to change the virtualization configuration or power partitions on and off.  This error can be recovered by doing a re-IPL of the system.
* A problem was fixed for I/O adapters not recovering from low-level EEH errors, resulting in a Permanent EEH error with SRC B7006971 logged.  These errors can occur during memory relocation in parallel with heavy I/O traffic.  The affected adapters can be recovered by a re-IPL of the system.
* A problem was fixed for an unexpected Core Watchdog error during a reset of the service processor with SRC B150B901 logged.  With enough service processor resets in a row, it is possible for the service processor to go to a failed state with SRC B1817212 on systems with a single service processor.
On systems with redundant service processors, the failed service processor would get guarded with a B151E6D0 or B152E6D0 SRC depending on which service processor fails.  The hypervisor and the partition workloads would continue to run in these cases of failed service processors.
* A problem was fixed for an intermittent IPL failure with BC131705 and BC8A1703 logged with a processor core called out.  This is a rare error and does not have a real hardware fault, so the processor core can be unguarded and used again on the next IPL.
* A problem was fixed for DDR4 2933 MHz and 3200 MHz DIMMs not defaulting to the 2666 MHz speed on a new DIMM plug, thus preventing the system from IPLing.
* A problem was fixed for a PCIe Hub checkstop with SRC B138E504 logged that fails to guard the errant processor chip.  With the fix, the problem hardware FRU is guarded so there is not a recurrence of the error on the next IPL.
* A problem was fixed for a VRM error for a Self Boot Engine (SBE) that caused the system to go to the terminate state after the error rather than re-IPLing to run-time.  A re-IPL will recover the system.
* A problem was fixed for an IPMI core dump and SRC B1818601 logged intermittently when an IPMI session is closed.  A flood of B1818A03 SRCs may be logged after the error occurs.  The IPMI server is not impacted and a call home is reported for the problem.  There is no service outage for the IPMI users because of this.
* A problem was fixed for a boot device hang, leading to a long time-out condition before the service processor gives up.  This problem has a very low frequency and a re-IPL is normally successful to recover the system.
* A problem was fixed for DIMM row repairs for 8Gb and 16Gb DIMMs to allow the ECC spare repair to be used.  This problem does not affect all the memory in the DIMM, just the memory in the first rank position.
* A problem was fixed for deconfigured FRUs that showed as Unit Type of "Unknown" in the Advanced System Management Interface (ASMI).  The following FRU type names will be displayed if deconfigured (shown here with a description of each FRU type as well):
DMI: Processor to Memory Buffer Interface
MC: Memory Controller
MFREFCLK: Multi Function Reference Clock
MFREFCLKENDPT: Multi function reference clock end point
MI: Processor to Memory Buffer Interface
NPU: Nvidia Processing Unit
OBUS_BRICK: OBUS
SYSREFCLKENDPT: System reference clock end point
TPM: Trusted Platform Module
* A problem was fixed for insufficient fan speeds for PCIe cards that require additional cooling.  To circumvent this problem for systems that have the high performance PCIe cards, disable Idle Power Saver mode and ensure the system power mode is set to Nominal, Dynamic Performance, or Maximum Performance.  With the fix, if an adapter is known to require higher levels of cooling, the system automatically speeds up fans to increase airflow across the PCIe adapters.  The affected adapters that need the additional cooling are the PCIe SAS adapters with feature codes EJ0J, EJ0K, EJ0L, EJ10, EJ14, EJIN, and EJIP.
* A problem was fixed for shared processor partitions going unresponsive after changing the processor sharing mode of a dedicated processor partition from "allow when partition is active" to either "allow when partition is inactive" or "never".  This problem can be circumvented by avoiding disabling processor sharing when active on a dedicated processor partition.  To recover the partition if the issue has been encountered, enable "processor sharing when active" for the partition.
* A problem was fixed for hypervisor error logs issued during the IPL missing the firmware version.  This happens on every IPL for logs generated during the early part of the IPL.
* A problem was fixed for a continuous logging of B7006A28 SRCs after the threshold limit of PCIe Advanced Error Reporting (AER) correctable errors is reached.  The error log flooding can cause error buffer wrapping and other performance issues. * A problem was fixed for an error in deleting a partition with the virtualized Trusted Platform Module (vTPM) enabled and SRC B7000602 logged.  When this error occurs, the encryption process in the hypervisor may become unusable.  The problem can be recovered from with a re-IPL of the system. * A problem was fixed for a Live Partition Mobility (LPM) migration of a partition to a shared processor pool, which results in the partition being unable to consume uncapped cycles on the target system.  To prevent the issue from occurring, partitions can be migrated to the default shared processor pool and then dynamically moved to the desired shared processor pool.  To recover from the issue, use DLPAR to add or remove a virtual processor to/from the affected partition, dynamically move the partition between shared processor pools, reboot the partition, or re-IPL the system. * A problem was fixed for informational (INF) errors for the PCIe Hub (PHB) at a threshold limit causing the I/O slots to go non-operational.   The system I/O can be recovered with a re-IPL. * A problem was fixed for the HMC in some instances reporting a VIOS partition as an AIX partition.  The VIOS partition can be used correctly even when it is misidentified. * A problem was fixed for errors in the PHB performance counters collected by the 24x7 performance monitor. * A problem was fixed for certain SR-IOV adapters where SRC B400FF01 errors are seen during configuration of the adapter into SR-IOV mode or updating adapter firmware. This fix updates the adapter firmware to 11.2.211.37  for the following Feature Codes: EN15,  EN17, EN0H, EN0J, EN0M, EN0N, EN0K, EN0L, EL38, EL3C, EL56, and EL57. 
The SR-IOV adapter firmware level update for the shared-mode adapters happens under user control to prevent unexpected temporary outages on the adapters.  A system reboot will update all SR-IOV shared-mode adapters with the new firmware level.  In addition, when an adapter is first set to SR-IOV shared mode, the adapter firmware is updated to the latest level available with the system firmware (and it is also updated automatically during maintenance operations, such as when the adapter is stopped or replaced).  And lastly, selective manual updates of the SR-IOV adapters can be performed using the Hardware Management Console (HMC).  To selectively update the adapter firmware, follow the steps given at the IBM Knowledge Center for using HMC to make the updates:   https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm . Note: Adapters that are capable of running in SR-IOV mode, but are currently running in dedicated mode and assigned to a partition, can be updated concurrently either by the OS that owns the adapter or the managing HMC (if OS is AIX or VIOS and RMC is running). * A problem was fixed for a system terminating if there was even one predictive or recoverable SRC.  When this problem occurs, all hardware SRCs logged are treated as terminating SRCs.  For this behavior to occur, the initial service processor boot from the AC power off state must have failed to complete cleanly, instead triggering an internal reset (a rare error), leaving some parts of the service processor not initialized.  This problem can be recovered by doing an AC power cycle, or concurrently on an active system with the assistance of IBM support. * A security problem was fixed in the service processor OpenSSL support that could cause secured sockets to hang, disrupting HMC communications for system management and partition operations.  The Common Vulnerabilities and Exposures issue number is CVE-2018-0732. 
* A security problem was fixed in the service processor Network Security Services (NSS) services which, with a man-in-the-middle attack, could provide false completion or errant network transactions or exposure of sensitive data from intercepted SSL connections to ASMI, Redfish, or the service processor message server.  The Common Vulnerabilities and Exposures issue number is CVE-2018-12384. * A problem was fixed for IPMI sessions in the service processor causing a flood of B181A803 informational error logs on registry read failures for IPv6 and IPv4 keywords.  These error logs do not represent a real problem and may be ignored. * A security problem was fixed in the service processor TCP stack that would allow a Denial of Service (DOS) attack with TCP packets modified to trigger time-consuming and computationally expensive calls.  By sending specially modified packets within ongoing TCP sessions with the Management Consoles,  this could lead to a CPU saturation and possible reset and termination of the service processor.   The Common Vulnerabilities and Exposures issue number is CVE-2018-5390. * A security problem was fixed in the service processor TCP stack that would allow a Denial of Service (DOS) attack by allowing very large IP fragments to trigger time-consuming and computationally expensive calls in packet reassembly.  This could lead to a CPU saturation and possible reset and termination of the service processor.   The Common Vulnerabilities and Exposures issue number is CVE-2018-5391.  With the fix, changes were made to lower the IP fragment thresholds to invalidate the attack. VM920_089_075 / FW920.24 02/12/19 Impact:  Performance      Severity:  SPE New Features and Functions * Support for up to 8 production SAP HANA LPARs and 16 TB of memory. System firmware changes that affect all systems * A problem was fixed for a concurrent firmware update that could hang during the firmware activation, resulting in the system entering into Power safe mode.  
The system can be recovered by doing a re-IPL of the system with a power down and power up.  A concurrent remove of this fix to the firmware level FW920.22 will fail with the hang, so moving back to this level should only be done with a disruptive firmware update. * A problem was fixed where installing a partition with a NIM server may fail when using an SR-IOV adapter with a Port VLAN ID (PVID) configured.  This error is a regression problem introduced in the 11.2.211.32 adapter firmware.  This fix reverts the adapter firmware back to 11.2.211.29  for the following Feature Codes:  EN15,  EN17, EN0H, and EN0K.  Because the adapter firmware is reverted to the prior version, all changes included in the 11.2.211.32 are reverted as well.  Circumvention options for this problem can be found at the following link: http://www.ibm.com/support/docview.wss?uid=ibm10794153. The SR-IOV adapter firmware level update for the shared-mode adapters happens under user control to prevent unexpected temporary outages on the adapters.  A system reboot will update all SR-IOV shared-mode adapters with the new firmware level.  In addition, when an adapter is first set to SR-IOV shared mode, the adapter firmware is updated to the latest level available with the system firmware (and it is also updated automatically during maintenance operations, such as when the adapter is stopped or replaced).  And lastly, selective manual updates of the SR-IOV adapters can be performed using the Hardware Management Console (HMC).  To selectively update the adapter firmware, follow the steps given at the IBM Knowledge Center for using HMC to make the updates:   https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm . 
Note: Adapters that are capable of running in SR-IOV mode, but are currently running in dedicated mode and assigned to a partition, can be updated concurrently either by the OS that owns the adapter or the managing HMC (if OS is AIX or VIOS and RMC is running). VM920_080_075 / FW920.22 12/13/18 Impact:  Availability      Severity:  SPE System firmware changes that affect all systems * A problem was fixed for an intermittent IPL failure with SRCs B150BA40 and B181BA24 logged.  The system can be recovered by IPLing again.  The failure is caused by a memory buffer misalignment, so it represents a transient fault that should occur only rarely. * A problem was fixed for intermittent PCIe correctable errors which would eventually threshold and cause SRC B7006A72 to be logged. PCIe performance degradation or temporary loss of one or more PCIe IO slots could also occur resulting in SRCs B7006970 or B7006971. VM920_078_075 / FW920.21 11/28/18 Impact:  Availability      Severity:  SPE * This Service Pack contained updates for MANUFACTURING ONLY. VM920_075_075 / FW920.20 11/16/18 Impact:  Data                  Severity:  HIPER New features and functions * Support was enabled for eRepair spare lane deployment for fabric and memory buses. * Support was added for doing soft post package memory row repair (sPPR) on the DDR4 DIMMs during the system IPL. The sPPR feature saves on the use of ECC spares for memory recovery, reducing the number of DIMMs that have to be guarded for memory errors. * Support was added for Multi-Function clock card failover. System firmware changes that affect all systems * HIPER/Non-Pervasive: DISRUPTIVE:  Fixes included to address potential scenarios that could result in undetected data corruption, system hangs, or system terminations. * DISRUPTIVE:  A problem was fixed for PCIe and SAS adapters in slots attached to a PLX (PCIe switch) failing to initialize and not being found by the Operating System.  
The problem should not occur on the first IPL after an AC power cycle, but subsequent IPLs may experience the problem. * DEFERRED:  A problem was fixed for a PCIe clock failure in the PCIe3 I/O expansion drawer (feature #EMX0), causing loss of PCIe slots.   The system must be re-IPLed for the fix to activate. * DEFERRED:  A problem was fixed for a possible system hang in the early boot stage.  This could occur during periods of very high activity for memory read operations which deplete all read buffers, hanging an internal process that requires a read buffer.  With the fix, a congested memory controller can stall the read pipeline to make a read buffer available for the internal processes. * DEFERRED:  A problem was fixed for concurrent maintenance operations for PCIe expansion drawer cable cards and PCI adapters that could cause loss of system hardware information in the hypervisor with these side effects:  1) partition secure boots could fail with SRC BA540100 logged; 2) Live Partition Mobility (LPM) migrations could be blocked; 3) SR-IOV adapters could be blocked from going into shared mode; 4) Power Management services could be lost; and 5) warm re-IPLs of the system can fail.  The system can be recovered by powering off and then IPLing again. * DEFERRED:  A problem was fixed for a transient VRM over-current condition for loads on the USB bus that could fail an IPL with SRC 11002700 00002708 logged.  The frequency of the failure is about 1 in every 5 IPL attempts.  The system can be recovered by doing another IPL. * A problem was fixed for an unhelpful error message of "HSCL1473 Cannot execute atomic operation. Atomic operations are not enabled." that is displayed on the HMC if there are no licensed processors available for the boot of a partition. * A problem was fixed for a memory channel failure due to a RCD parity error calling out the affected DIMMs correctly, but also falsely calling out either the memory controller or a processor, or both.  
* A problem was fixed for adapters in slots attached to a PLX (PCIe switch) failing with SRCs B7006970 and BA188002  when second and subsequent errors on the PLX failed to initiate PLX recovery.  For this infrequent problem to occur, it requires a second error on the PLX after recovery from the first error. * A problem was fixed for the system going into Safe Mode after a run-time deconfiguration of a processor core,  resulting in slower performance.  For this problem to occur, there must be a second fault in the Power Management complex after the processor core has been deconfigured. * A problem was fixed for service processor resets confusing the wakeup state of processor cores, resulting in degraded cores that cannot be managed for power usage.  This will result in the system consuming more power, but also running slower due to the inability to make use of WOF optimizations around the cores.  The degraded processor cores can be recovered by a re-IPL of the system. * A problem was fixed for the On-Chip Controller (OCC) MAX memory bandwidth sensor sometimes having values that are too high. * A problem was fixed for DDR4 memory training in the IPL to improve the DDR4 write margin.  Lesser write margins can potentially cause memory errors. * A problem was fixed for a system failure with SRC B700F103 that can occur if a shared-mode SR-IOV adapter is moved from a high-performance slot to a lower performance slot.   This problem can be avoided by disabling shared mode on the SR-IOV adapter, moving the adapter, and then re-enabling shared mode. * A problem was fixed for the system going to Safe Mode if all the cores of a processor are lost at run-time. * A problem was fixed for a Core Management Engine (CME) fault causing a system failure with SRC B700F105 if processor cores had been guarded during the IPL. * A problem was fixed for a Core Management Engine (CME) fault that could result in a system checkstop. 
* A problem was fixed for a missing error log for the case of the TPM card not being detected when it is required for a trusted boot. * A problem was fixed for a flood of BC130311 SRCs that could occur when changing Energy Scale Power settings, if the Power Management is in a reset loop because of errors. * A problem was fixed for coherent accelerator processor proxy (CAPP) unit errors being called out as CEC hardware Subsystem instead of PROCESSOR_UNIT. * A problem was fixed for an incorrect processor callout on a memory channel error that causes a CHIFIR[61] checkstop on the processor. * A problem was fixed for a Logical LAN (l-lan) device failing to boot when there is a UDP packet checksum error.  With the fix, there is a new option when configuring a l-lan port in SMS to enable or disable the UDP checksum validation.  If the adapter is already providing the checksum validation, then the l-lan port needs to have its validation disabled. * A problem was fixed for missing error logs for hardware faults if the hypervisor terminates before the faults can be processed.  With the fix, the hardware attentions for the bad FRUs will get handled, prior to processing the termination of the hypervisor. * A problem was fixed for the diagnostics for a system boot checkstop failing to isolate to the bad FRU if it occurred on a non-master processor or a memory chip connected to a non-master processor.  With the fix, the fault attentions from a non-master processor are properly isolated to the failing chip so it can be guarded or recovered as needed to allow the IPL to continue. * A problem was fixed for Hostboot error log IDs (EID) getting reused from one IPL to the next, resulting in error logs getting suppressed (missing)  for new problems on the subsequent IPLs if they have a re-used EID that was already present in the service processor error logs. 
* A problem was fixed for Live Partition Mobility (LPM) partition migration to preserve the Secure Boot setting on the target partition.  Secure Boot is supported for FW920 and later partitions.  Without the fix, a non-zero Secure Boot setting for the partition is reset to zero after the migration. * A problem was fixed for an SR-IOV adapter using the wrong Port VLAN ID (PVID) for a logical port (VF), where its non-zero PVID could be changed following a network install using the logical port. This fix updates adapter firmware to 11.2.211.32  for the following Feature Codes: EN15,  EN17, EN0H, EN0J, EN0M, EN0N, EN0K, EN0L, EL38, EL3C, EL56, and EL57. The SR-IOV adapter firmware level update for the shared-mode adapters happens under user control to prevent unexpected temporary outages on the adapters.  A system reboot will update all SR-IOV shared-mode adapters with the new firmware level.  In addition, when an adapter is first set to SR-IOV shared mode, the adapter firmware is updated to the latest level available with the system firmware (and it is also updated automatically during maintenance operations, such as when the adapter is stopped or replaced).  And lastly, selective manual updates of the SR-IOV adapters can be performed using the Hardware Management Console (HMC).  To selectively update the adapter firmware, follow the steps given at the IBM Knowledge Center for using HMC to make the updates:   https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm . Note: Adapters that are capable of running in SR-IOV mode, but are currently running in dedicated mode and assigned to a partition, can be updated concurrently either by the OS that owns the adapter or the managing HMC (if OS is AIX or VIOS and RMC is running). * A problem was fixed for a SMS ping failure for a SR-IOV adapter VF with a non-zero Port VLAN ID (PVID).  This failure may occur after the partition with the adapter has been booted to AIX, and then rebooted back to SMS.  
Without the fix, residue information from the AIX boot that should have been cleared is retained for the VF. * A problem was fixed for a SR-IOV adapter vNIC configuration error that did not provide a proper SRC to help resolve the issue of the boot device not pinging in SMS due to a maximum transmission unit (MTU) size mismatch in the configuration.  The use of a vNIC backing device does not allow configuring VFs for jumbo frames when the Partition Firmware configuration for the adapter (as specified on the HMC) does not support jumbo frames.  When this happens, the vNIC adapter will fail to ping in SMS and thus cannot be used as a boot device.  With the fix,  the vNIC driver configuration code now checks the vNIC login (open) return code so it can issue an SRC when the open fails for an MTU issue (such as a jumbo frame mismatch) or for some other reason.  A jumbo frame is an Ethernet frame with a payload greater than the standard MTU of 1,500 bytes and can be as large as 9,000 bytes. * A problem was fixed for three bad lanes causing a memory channel fail on the DMI interface.  With the fix, the errors on the third lane on the DMI interface will be recovered and it will continue to be used as long as it functions. * A problem was fixed for preventing loss of function on an SR-IOV adapter with an 8MB adapter firmware image if it is placed into SR-IOV shared mode.  The 8MB image is not supported at the FW920.20 firmware level.  With the fix, the adapter with the 8MB image is rejected with an error without an attempt to load the older 4MB image on the adapter which could damage it.  This problem affects the following SR-IOV adapters:  #EC2R/#EC2S with CCIN 58FA; and #EC2T/#EC2U with CCIN 58FB. * A problem was fixed for incorrect recovery from a service processor mailbox error that was causing the system IPL to fail with the loss of all the PCIe links.  If this occurs, the system will normally re-IPL successfully.  
* A problem was fixed for SR-IOV adapter failures when running in shared mode in a Huge Dynamic DMA Window (HDDW) slot.  I/O slots are enabled with HDDW by using the I/O Adapter Enlarged Capacity setting in the Advanced System Management Interface (ASMI).   This problem can be circumvented by moving the SR-IOV adapter to a non-HDDW slot, or alternatively, disabling HDDW on the system. * A problem was fixed for a system termination during a re-IPL with power on, with SRC B181E540 logged.  The system can be recovered by powering off and then IPLing.  This problem occurs infrequently and can be avoided by powering off the system between IPLs. System firmware changes that affect certain systems * For a shared memory partition,  a problem was fixed for a Live Partition Mobility (LPM) migration hang after a Mover Service Partition (MSP) failover in the early part of the migration.  To recover from the hang, a migration stop command must be given on the HMC.  Then the migration can be retried. * For a shared memory partition,  a problem was fixed for a Live Partition Mobility (LPM) migration failure to an indeterminate state.  This can occur if the Mover Service Partition (MSP) has a failover that occurs when the migrating partition is in the state of "Suspended."  To recover from this problem, the partition must be shut down and restarted. * On a system with a Cloud Management Console and a HMC Cloud Connector, a problem was fixed for memory leaks in the Redfish server causing Out of Memory (OOM) resets of the service processor. * On a system with a partition with dedicated processors that are set to allow processor sharing with "Allow when partition is active" or "Allow always", a problem was fixed for a potential system hang if the partition is booting or shutting down while Dynamic Platform Optimizer (DPO) is running.  
As a work-around to the problem, the processor sharing can be turned off before running DPO, or avoid starting or shutting down dedicated partitions with processor sharing while DPO is active. * On a system with an AMS partition, a problem was fixed for a Live Partition Mobility (LPM) migration failure when migrating from P9 to a pre-FW860 P8 or P7 system.  This failure can occur if the P9 partition is in dedicated memory mode, and the Physical Page Table (PPT) ratio is explicitly set on the HMC (rather than keeping the default value) and the partition is then transitioned to Active Memory Sharing (AMS) mode prior to the migration to the older system.  This problem can be avoided by using dedicated memory in the partition being migrated back to the older system. VM920_057_057 / FW920.10 09/24/18 Impact:  Data                  Severity:  HIPER New features and functions * DISRUPTIVE:  Support was added for installing and running mixed levels of P9 processors on the system in compatibility mode. * Support added for PCIe4 2-port 100Gb ROCE RN adapter with feature code #EC66 for AIX and IBM i.  This PCIe Gen4 Ethernet x16 adapter provides two 100 GbE QSFP28 ports. * Support was added to enable mirrored Hostboot memory. System firmware changes that affect all systems * HIPER/Non-Pervasive:   A problem was fixed for a potential problem that could result in undetected data corruption. * DEFERRED:   A problem was fixed for the Input Offset Voltage (VIO) to the processor being set too low, having less margin for PCIe and XBUS errors that could cause a higher than normal rate of processor or PCIe device failures during the IPL or at run time. * A problem was fixed for truncated firmware assisted dumps (fadump/kdump).  This can happen when the dumps are configured with chunks > 1Gb. * A problem was fixed for the default gateway in the Advanced System Management Interface (ASMI) IPv4 network configurations showing as 0.0.0.0 which is an invalid gateway IP address.  
This problem can occur if ASMI is used to clear the gateway value with blanks.   * A problem was fixed for the Advanced System Management Interface (ASMI) displaying the IPv6 network prefix in decimal instead of hex character values.  The service processor command line "ifconfig" can be used to see the IPv6 network prefix value in hex as a circumvention to the problem. * A problem was fixed for link speed for PCIe Generation 4 adapters showing as "unknown"  in the Advanced System Management Interface (ASMI) PCIe Hardware Topology menu. * A problem was fixed for the system crashing on PCIe errors that result in a guard action for the FRU. * A problem was fixed for an extraneous SRC B7000602 being logged intermittently when the system is being powered off.  The trigger for the error log is a HMC request for information that does not complete before the system is shut down.  If the HMC sends certain commands to get capacity information (e.g., 0x8001/0x0107) while the CEC is shutting down, the SFLPHMCCMD task can fail with this assertion.   This error log may be ignored. * A problem was fixed for the service processor Thermal Management not being made aware of a Power Management failure that the hypervisor had detected.  This could cause the system to go into Safe Mode with degraded performance if the error is not recovered. * A problem was fixed for the On-Chip Controller (OCC) being held in reset after a channel error for the memory.  The system would remain in Safe Mode (with degraded performance) until a re-IPL of the system. The trigger for the problem requires the memory channel checkstop and the OCC not being able to detect the error.  Both of these conditions are rare, making the problem unlikely to occur. * A problem was fixed for the memory bandwidth sensors for the P9 memory modules being off by a factor of 2.  As a workaround, divide memory sensor values by 2 to get a corrected value. 
* A problem was fixed for known bad DRAM bits having error logs generated repeatedly with each IPL.  With the fix, the error logs only occur one time at the initial failure, and thereafter the known bad DRAM bits are repaired as part of the normal memory initialization. * A problem was fixed for a Hostboot run time memory channel error where the processor could be called out erroneously instead of the memory DIMM.  For this error to happen, there must be a RCD parity error on the memory DIMM with a channel failure attention on the processor side of the bus and no channel failure attention on the memory side of the bus, and the system must recover from the channel failure. * A problem was fixed for DDR3 DIMM memory training where the ranks not being calibrated had their outputs enabled.  The JEDEC specification requires that the outputs be disabled.  Adding the termination settings on the non-calibrating ranks can improve memory margins (thereby reducing the rate of memory failures), and it matches the memory training technique used for the DDR4 memory. * A problem was fixed for a PCIe2 4-port Slot Adapter with feature code #2E17  that cannot recover from a double EEH error if the second error occurs during the EEH recovery.  Because this is a double-error scenario, the problem should be very infrequent. * A rare problem was fixed for slowdowns in a Live Partition Mobility migration of a partition with Active Memory Sharing (AMS).  The AMS partition does not fail but the slower performance could cause time-outs in the workload if there are time constraints on the operations. * A problem was fixed for isolation of memory channel failure attentions on the processor side of the differential memory interface (DMI) bus.  This is only a problem if there are no attentions from the memory module side of the bus, and it could cause the service processor run time diagnostics to get caught in a hang condition, or result in a system checkstop with the processor called out. 
* A problem was fixed for the memory bandwidth sensors for the P9 memory modules sometimes being zero. * A problem was fixed for deconfiguring checkstopped processor cores at run time.  Without the fix, the processor core checkstop error could cause a checkstop of the system and a re-IPL,  or it could force the system into Safe Mode. * A problem was fixed for a failed TPM card preventing a system IPL, even after the card was replaced. * A problem was fixed for differential memory interface (DMI) lane sparing to prevent shutting down a good lane on the TX side of the bus when a lane has been spared on the RX side of the bus.  If the XBUS or DMI bus runs out of spare lanes, it can checkstop the system, so the fix helps use these resources more efficiently. * A problem was fixed for IPL failures with SRC BC50090F when replacing Xbus FRUs.  The problem occurs if VPD has a stale bad memory lane record and that record does not exist on both ends of the bus. * A problem was fixed for SR-IOV adapter dumps hanging with low-level EEH events causing failures on VFs of other non-target SR-IOV adapters. * A problem was fixed for an SR-IOV VF configured with a PVID failing to function correctly after a virtual function reset.  The VF will allow receiving untagged frames but not be able to transmit them. * A problem was fixed for SR-IOV VFs, where a VF configured with a PVID priority may be presented to the OS with an incorrect priority value. * A problem was fixed for a Self Boot Engine (SBE) recoverable error at run time causing the system to go into Safe Mode. * A problem was fixed for a rare Live Partition Mobility migration hang with the partition left in VPM (Virtual Page Mode) which causes performance concerns.  This error is triggered by a migration failover operation occurring during the migration state of "Suspended" when there are insufficient VASI buffers available to clear all partition state data waiting to be sent to the migration target.  
Migration failovers are rare and the migration state of "Suspended" is a migration state lasting only a few seconds for most partitions, so this problem should not be frequent.  On the HMC, there will be an inability to complete either a migration stop or a recovery operation.  The HMC will show the partition as migrating and any attempt to change that will fail.  The system must be re-IPLed to recover from the problem. * A problem was fixed for Self Boot Engine (SBE) failure data being collected from the wrong processor if the SBE is not running on processor 0.  This can result in the wrong FRU being called out for SBE failures. System firmware changes that affect certain systems * On systems which do not have a HMC attached,  a problem was fixed for a firmware update initiated from the OS from FW920.00 to FW920.10 that caused a system crash one hour after the code update completed.  This does not fix the case of the OS initiated firmware update back to FW920.00 from FW920.10 which will still result in a crash of the system.  Do not initiate a FW920.10  to FW920.00 code update via the operating system.  Use only HMC or USB methods of code update for this case.  If a HMC or USB code update is not an option,  please contact IBM support. * A problem was fixed for Linux or AIX partitions crashing during a firmware assisted dump or when using Linux kexec to restart with a new kernel.  This problem was more frequent for the Linux OS with kdump failing with "Kernel panic - not syncing: Attempted to kill init" in some cases. VM920_040_040 / FW920.00 08/20/18 Impact:  New      Severity:  New New Features and Functions * GA Level 4.0 How to Determine The Currently Installed Firmware Level You can view the server's current firmware level on the Advanced System Management Interface (ASMI) Welcome pane. It appears in the top right corner. Example: VM920_123. 
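Firmware level strings follow the VMxxx_yyy_zzz pattern described in section 6.0, where the release prefix (VMxxx) determines whether a new level is an update (same release) or an upgrade (different release). The following is a minimal, hypothetical sketch of that check; function names are illustrative and this is not an IBM-provided tool:

```python
# Hypothetical helper illustrating the VMxxx_yyy_zzz naming convention
# from section 6.0; names and structure are illustrative, not IBM code.

def parse_level(level: str):
    """Split a level string such as 'VM920_112_101' into
    (release, activated, installed), e.g. ('VM920', 112, 101)."""
    release, activated, installed = level.split("_")
    return release, int(activated), int(installed)

def classify_install(current_level: str, new_level: str) -> str:
    """Per section 6.0: same release prefix is an 'update',
    a different release prefix is an 'upgrade'."""
    same_release = parse_level(current_level)[0] == parse_level(new_level)[0]
    return "update" if same_release else "upgrade"

print(classify_install("VM910_040_040", "VM910_041_040"))  # update
print(classify_install("VM900_040_040", "VM910_050_050"))  # upgrade
```

The two examples mirror the ones given in section 6.0: moving within release VM910 is an update, while moving from VM900 to VM910 is an upgrade.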
---------------------------------------------------------------------------------- 5.0 Downloading the Firmware Package Follow the instructions on Fix Central. You must read and agree to the license agreement to obtain the firmware packages. Note: If your HMC is not internet-connected, you will need to download the new firmware level to a USB flash memory device or ftp server. ---------------------------------------------------------------------------------- 6.0 Installing the Firmware The method used to install new firmware will depend on the release level of firmware which is currently installed on your server. The release level can be determined by the prefix of the new firmware's filename. Example: VMxxx_yyy_zzz Where xxx = release level * If the release level will stay the same (Example: Level VM910_040_040 is currently installed and you are attempting to install level VM910_041_040) this is considered an update. * If the release level will change (Example: Level VM900_040_040 is currently installed and you are attempting to install level VM910_050_050) this is considered an upgrade. Instructions for installing firmware updates and upgrades can be found at https://www.ibm.com/support/knowledgecenter/9040-MR9/p9eh6/p9eh6_updates_sys.htm IBM i Systems: For information concerning IBM i Systems, go to the following URL to access Fix Central:  http://www-933.ibm.com/support/fixcentral/ Choose "Select product", under Product Group specify "System i", under Product specify "IBM i", then Continue and specify the desired firmware PTF accordingly. 7.0 Firmware History The complete Firmware Fix History (including HIPER descriptions)  for this Release level can be reviewed at the following URL: http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VM-Firmware-Hist.html 8.0 Change History Date Description September 19, 2019 Fix description update for firmware level VM920_112_101 / FW920.40. 
September 10, 2019 Fix description updates for firmware levels VM920_112_101 / FW920.40, VM920_101_101 / FW920.30 and VM920_057_057 / FW920.10.