01VH940_093_027.html Power9 System Firmware
Applies to: 9080-M9S
This document provides information about the installation of Licensed Machine
or Licensed Internal Code, which is sometimes referred to generically as
microcode or firmware.
----------------------------------------------------------------------------------
Contents
* 1.0 Systems Affected
* 1.1 Minimum HMC Code Level
* 2.0 Important Information
* 2.1 IPv6 Support and Limitations
* 2.2 Concurrent Firmware Updates
* 2.3 Memory Considerations for Firmware Upgrades
* 2.4 SBE Updates
* 3.0 Firmware Information
* 3.1 Firmware Information and Description Table
* 4.0 How to Determine Currently Installed Firmware Level
* 5.0 Downloading the Firmware Package
* 6.0 Installing the Firmware
* 7.0 Firmware History
----------------------------------------------------------------------------------
1.0 Systems Affected
This package provides firmware for Power Systems E980 (9080-M9S) servers only.
The firmware level in this package is:
* VH940_093 / FW940.41
----------------------------------------------------------------------------------
1.1 Minimum HMC Code Level
This section describes the "Minimum HMC Code Level" required by the System
Firmware to complete the firmware installation process. When installing the
System Firmware, the HMC level must be equal to or higher than the "Minimum
HMC Code Level" before starting the system firmware update. If the HMC
managing the server targeted for the System Firmware update is running a code
level lower than the "Minimum HMC Code Level", the firmware update will not
proceed.
The Minimum HMC Code levels for this firmware for HMC x86, ppc64 or ppc64le
are listed below.
x86 - This term is used to reference the legacy HMC that runs on
x86/Intel/AMD hardware for both the 7042 Machine Type appliances and the
Virtual HMC that can run on the Intel hypervisors (KVM, VMware, Xen).
 * The Minimum HMC Code level for this firmware is: HMC V9R1M940 (PTF MH01836).
 * Although the Minimum HMC Code level for this firmware is listed above, if
the HMC supports V9R2, HMC V9R2M951.2 (PTF MH01892) or higher is recommended to
avoid an issue that can cause the HMC to lose connections to all servers for a
brief time, with service events E2FF1409 and E23D040A being reported. This will
cause all running server tasks, such as a server firmware upgrade, to fail.
ppc64 or ppc64le - describes the Linux code that is compiled to run on
Power-based servers or LPARs (Logical Partitions).
 * The Minimum HMC Code level for this firmware is: HMC V9R1M940 (PTF MH01837).
 * Although the Minimum HMC Code level for this firmware is listed above, if
the HMC supports V9R2, HMC V9R2M951.2 (PTF MH01893) or higher is recommended to
avoid an issue that can cause the HMC to lose connections to all servers for a
brief time, with service events E2FF1409 and E23D040A being reported. This will
cause all running server tasks, such as a server firmware upgrade, to fail.
For information concerning HMC releases and the latest PTFs, go to the
following URL to access Fix Central:
http://www-933.ibm.com/support/fixcentral/
For specific fix level information on key components of IBM Power Systems
running the AIX, IBM i and Linux operating systems, we suggest using the Fix
Level Recommendation Tool (FLRT):
http://www14.software.ibm.com/webapp/set2/flrt/home
NOTES:
- You must be logged in as hscroot in order for the firmware installation to
complete correctly.
- Systems Director Management Console (SDMC) does not support this System
Firmware level.
----------------------------------------------------------------------------------
2.0 Important Information
Boot adapter microcode requirement
Update all adapters which are boot adapters, or which may be used as boot
adapters in the future, to the latest microcode from IBM Fix Central. The
latest microcode will ensure the adapters support the new Firmware Secure Boot
feature of Power Systems. This requirement applies when updating system
firmware from a level prior to FW940 to levels FW940 and later.
The latest adapter microcode levels include signed boot driver code. If a
boot-capable PCI adapter is not installed with the latest level of adapter
microcode, the partition which owns the adapter will boot, but error logs with
SRCs BA5400A5 or BA5400A6 will be posted. Once the adapter(s) are updated, the
error logs will no longer be posted.
Downgrading firmware from any given release level to an earlier release level
is not recommended
Firmware downgrade warnings:
1) Adapter feature codes (#EC2R/#EC2S/#EC2T/#EC2U and #EC3L/#EC3M and
#EC66/EC67) when configured in SR-IOV shared mode in FW930 or later, even if
originally configured in shared mode in a pre-FW930 release, may not function
properly if the system is downgraded to a pre-FW930 release. The adapter should
be configured in dedicated mode first (i.e. take the adapter out of SR-IOV
shared mode) before downgrading to a pre-FW930 release.
2) If partitions have been run in POWER9 compatibility mode in FW940, a
downgrade to an earlier release (pre-FW940) may cause a problem with the
partitions starting. To prevent this problem, the "server firmware" settings
must be reset by rebooting partitions in "Power9_base" before doing the
downgrade.
If you feel that it is necessary to downgrade the firmware on your system to
an earlier release level, please contact your next level of support.
2.1 IPv6 Support and Limitations
IPv6 (Internet Protocol version 6) is supported in the System Management
Services (SMS) in this level of system firmware. There are several limitations
that should be considered. When configuring a network interface card (NIC) for
remote IPL, only the most recently configured protocol (IPv4 or IPv6) is
retained. For example, if the network interface card was previously configured
with IPv4 information and is now being configured with IPv6 information, the
IPv4 configuration information is discarded.
A single network interface card may only be chosen once for the boot device
list. In other words, the interface cannot be configured for the IPv6 protocol
and for the IPv4 protocol at the same time.
2.2 Concurrent Firmware Updates
Concurrent system firmware update is supported on HMC Managed Systems only.
Ensure that there are no RMC connections issues for any system partitions
prior to applying the firmware update. If there is a RMC connection failure to
a partition during the firmware update, the RMC connection will need to be
restored and additional recovery actions for that partition will be required to
complete partition firmware updates.
2.3 Memory Considerations for Firmware Upgrades
Firmware Release Level upgrades and Service Pack updates may consume
additional system memory.
Server firmware requires memory to support the logical partitions on the
server. The amount of memory required by the server firmware varies according
to several factors.
Factors influencing server firmware memory requirements include the following:
* Number of logical partitions
* Partition environments of the logical partitions
* Number of physical and virtual I/O devices used by the logical
partitions
* Maximum memory values given to the logical partitions
Generally, you can estimate the amount of memory required by server firmware
to be approximately 8% of the system installed memory. The actual amount
required will generally be less than 8%. However, there are some server models
that require an absolute minimum amount of memory for server firmware,
regardless of the previously mentioned considerations.
Additional information can be found at:
https://www.ibm.com/support/knowledgecenter/9080-M9S/p9hat/p9hat_lparmemory.htm
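The 8% guideline above can be expressed as a simple planning calculation. This
is an illustrative sketch only (the function name and the 4 TB example are not
from IBM documentation); the actual amount reserved is generally lower and
depends on the partition and I/O factors listed above.

```python
def estimated_firmware_memory_gb(installed_memory_gb, overhead_fraction=0.08):
    """Rough upper-bound estimate of memory consumed by server firmware.

    The 8% figure is the planning guideline from this document; the actual
    amount is generally less, and some server models impose an absolute
    minimum regardless of configuration.
    """
    return installed_memory_gb * overhead_fraction

# Example: a system with 4 TB (4096 GB) of installed memory.
print(estimated_firmware_memory_gb(4096))  # 327.68 GB upper bound
```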
2.4 SBE Updates
Power9 servers contain Self-Boot Engines (SBEs), which are used to boot the
system. An SBE is internal to each Power9 chip and is used to "self boot" the
chip. The SBE image is persistent and is only reloaded if a system firmware
update contains an SBE change. If there is an SBE change and the system
firmware update is concurrent, the SBE update is delayed to the next IPL of
the CEC, which adds an additional 3-5 minutes per processor chip to that IPL.
If there is an SBE change and the system firmware update is disruptive, the
SBE update adds an additional 3-5 minutes per processor chip to the IPL.
During the SBE update process, the HMC or op-panel will display service
processor code C1C3C213 for each of the SBEs being updated. This is a normal
progress code and the system boot should not be terminated by the user. The
additional time can be estimated at 12-20 minutes per drawer, or up to 48-80
minutes for a maximum configuration.
The SBE image is updated with this service pack.
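The timing figures above can be combined into a small estimator. This is a
sketch only; the 4-chips-per-drawer figure is an assumption used here solely
to reproduce the document's per-drawer and maximum-configuration numbers, so
check your actual system configuration.

```python
def sbe_update_ipl_delay_minutes(num_proc_chips, per_chip_minutes=(3, 5)):
    """Estimate the extra IPL time added by a pending SBE update.

    Per this document, each processor chip adds roughly 3-5 minutes
    to the IPL while its SBE image is rewritten.
    """
    low, high = per_chip_minutes
    return num_proc_chips * low, num_proc_chips * high

# One drawer (assumed 4 processor chips): 12-20 minutes extra.
print(sbe_update_ipl_delay_minutes(4))   # (12, 20)
# Assumed maximum configuration (16 chips): 48-80 minutes extra.
print(sbe_update_ipl_delay_minutes(16))  # (48, 80)
```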
----------------------------------------------------------------------------------
3.0 Firmware Information
Use the following examples as a reference to determine whether your
installation will be concurrent or disruptive. For systems that are not
managed by an HMC, the installation of system firmware is always disruptive.
Note: The concurrent levels of system firmware may, on occasion, contain
fixes that are known as Deferred and/or Partition-Deferred. Deferred fixes can
be installed concurrently, but will not be activated until the next IPL.
Partition-Deferred fixes can be installed concurrently, but will not be
activated until a partition reactivate is performed. Deferred and/or
Partition-Deferred fixes, if any, will be identified in the "Firmware Update
Descriptions" table of this document. For these types of fixes (Deferred and/or
Partition-Deferred) within a service pack, only the fixes in the service pack
which cannot be concurrently activated are deferred.
Note: The file names and service pack levels used in the following examples
are for clarification only, and are not necessarily levels that have been, or
will be released.
System firmware file naming convention:
01VHxxx_yyy_zzz
* xxx is the release level
* yyy is the service pack level
 * zzz is the last disruptive service pack level
NOTE: Values of service pack and last disruptive service pack level (yyy and
zzz) are only unique within a release level (xxx). For example,
01VH900_040_040 and 01VH910_040_045 are different service packs.
An installation is disruptive if:
* The release levels (xxx) are different. Example:
Currently installed release is 01VH900_040_040, new release is 01VH910_050_050.
* The service pack level (yyy) and the last disruptive service pack level
(zzz) are the same. Example: VH910_040_040 is disruptive, no
matter what level of VH910 is currently installed on the system.
* The service pack level (yyy) currently installed on the system is lower
than the last disruptive service pack level (zzz) of the service pack to be
installed. Example: Currently installed service pack is
VH910_040_040 and new service pack is VH910_050_045.
An installation is concurrent if:
The release level (xxx) is the same, and
The service pack level (yyy) currently installed on the system is the same or
higher than the last disruptive service pack level (zzz) of the service pack to
be installed.
Example: Currently installed service pack is VH910_040_040, new service pack
is VH910_041_040.
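The naming convention and the disruptive/concurrent rules above can be
sketched as a small check. This is illustrative only; the function names are
not part of any IBM tool, and the rules are applied exactly as stated in this
section.

```python
def parse_level(name):
    """Split a firmware file name like '01VH910_040_040' into
    (release, service_pack, last_disruptive) per the 01VHxxx_yyy_zzz
    convention described above."""
    if name.startswith("01"):
        name = name[2:]
    release, yyy, zzz = name.split("_")
    return release, int(yyy), int(zzz)

def is_disruptive(installed, new):
    """Return True if installing `new` over `installed` is disruptive."""
    rel_i, yyy_i, _ = parse_level(installed)
    rel_n, yyy_n, zzz_n = parse_level(new)
    if rel_i != rel_n:      # release levels (xxx) differ
        return True
    if yyy_n == zzz_n:      # new pack's yyy equals its zzz
        return True
    return yyy_i < zzz_n    # installed yyy below new pack's zzz

print(is_disruptive("01VH910_040_040", "01VH910_041_040"))  # False (concurrent)
print(is_disruptive("01VH910_040_040", "01VH910_050_045"))  # True (disruptive)
```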
3.1 Firmware Information and Description Table
Filename             Size       Checksum  md5sum
01VH940_093_027.rpm  145461689  54991     161fc6bc5edd8627e460382ef8c48f2d
Note: The Checksum can be found by running the AIX sum command against the
rpm file (only the first 5 digits are listed).
i.e.: sum 01VH940_093_027.rpm
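As an alternative to the AIX sum command, the md5sum value in the table above
can be verified with a short script. This is a sketch; it assumes the package
has been downloaded to the current directory.

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the md5sum of a file for comparison against the value
    published in the firmware description table. Reads in 1 MB chunks
    to avoid loading the whole ~145 MB .rpm into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (compare against the table above):
# md5_of("01VH940_093_027.rpm") == "161fc6bc5edd8627e460382ef8c48f2d"
```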
VH940
For Impact, Severity and other Firmware definitions, please refer to the
'Glossary of firmware terms' URL below:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs
The complete Firmware Fix History for this Release Level can be reviewed at
the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html
VH940_093_027 / FW940.41
09/16/21 Impact: Data Severity: HIPER
System firmware changes that affect all systems
* HIPER: A problem was fixed which may occur on a target system following a
Live Partition Mobility (LPM) migration of an AIX partition utilizing Active
Memory Expansion (AME) with 64 KB page size enabled using the vmo tunable: "vmo
-ro ame_mpsize_support=1". The problem may result in AIX termination, file
system corruption, application segmentation faults, or undetected data
corruption.
Note: If you are doing an LPM migration of an AIX partition utilizing AME
and 64 KB page size enabled involving a POWER8 or POWER9 system, ensure you
have a Service Pack including this change for the appropriate firmware level on
both the source and target systems.
* HIPER/Pervasive: A problem was fixed for certain SR-IOV adapters in
Shared mode where multicast and broadcast packets were not properly routed out
to the physical port. This may result in network issues such as ping failure
or inability to establish TCP connections. This problem only affects the
SR-IOV adapters with the following Feature Codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CE; and #EC66/EC67
with CCIN 2CF3.
This problem was introduced by a fix delivered in the FW940.40 service pack.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
* A problem was fixed for Live Partition Mobility (LPM) migrations from
non-trusted POWER9 systems to POWER10 systems. The LPM migration failure occurs
every time a LPM migration is attempted from a non-trusted system source to
FW1010 and later. For POWER9 systems, non-trusted is the default setting. The
messages shown on the HMC for the failure are the following:
HSCL365C The partition migration has been stopped because platform firmware
detected an error (041800AC).
HSCL365D The partition migration has been stopped because target MSP
detected an error (05000127).
HSCL365D The partition migration has been stopped because target MSP
detected an error (05000127).
A workaround for the problem is to enable the trusted system key on the
POWER9 FW940/FW950 source system which can be done using an intricate
procedure. Please contact IBM Support for help with this workaround.
System firmware changes that affect certain systems
 * For a system with a partition running AIX 7.3, a problem was fixed for
running Live Update or Live Partition Mobility (LPM). AIX 7.3 supports Virtual
Persistent Memory (PMEM), which cannot be used with these operations; the
problem, however, made it appear that PMEM was configured when it was not. As
a result, Live Update and LPM operations always failed when attempted on AIX
7.3. Here is the failure output from a Live Update Preview:
"1430-296 FAILED: not all devices are virtual devices.
nvmem0
1430-129 FAILED: The following loaded kernel extensions are not known to be
safe for Live Update:
nvmemdd
...
1430-218 The live update preview failed.
0503-125 geninstall: The lvupdate call failed.
Please see /var/adm/ras/liveupdate/logs/lvupdlog for details."
VH940_087_027 / FW940.40
07/08/21 Impact: Availability Severity: SPE
New features and functions
* Support added to Redfish to provide a command to set the ASMI user
passwords using a new AccountService schema. Using this service, the ASMI
admin, HMC, and general user passwords can be changed.
* Support was changed to disable Service Location Protocol (SLP) by default
for newly shipped systems or systems that are reset to manufacturing defaults.
This change has been made to reduce memory usage on the service processor by
disabling a service that is not needed for normal system operations. This
change can be made manually for existing customers by changing it in ASMI with
the options "ASMI -> System Configuration -> Security -> External Services
Management" to disable the service.
System firmware changes that affect all systems
* A problem was fixed for the system going to a "password update required"
state on the HMC when downgrading from FW940 to FW930 service packs. This
problem is rare and can only happen if the passwords on the service processor
are set to the factory default values. The workaround to this problem is to
update the FSP user password on the HMC.
* A problem was fixed for a B150BA3C SRC callout that was refined by adding a
new isolation procedure to improve the accuracy of the repair and reduce its
impact on the system. One possible trigger for the problem could be a DIMM
failure in a node during an IPL with SRC BC200D01 logged followed by a
B150BA3C. Without the fix, the system backplane is called out and a node of
the system is deconfigured. With the fix, a new isolation service procedure
FSPSPA0 is added as a high priority callout and the system backplane callout is
made low priority.
"FSPSPA0- A problem has been detected in the Hostboot firmware.
1. Look for previous event log(s) for the same CEC drawer and replace that
hardware.
2. If no other event log exists, then submit all dumps and iqyylog for
review."
* A problem was fixed for processor cores in the first node not being shown
as deconfigured in ASMI when the first node is deconfigured. A workaround for
this issue is to go to the ASMI "Processor Deconfiguration" menu and navigate
to the second page to get the Processor Unit level details. By selecting Node
1 and clicking the "Continue" button, ASMI shows the correct information.
* A problem was fixed for an Out Of Memory (OOM) error on the primary service
processor when redundancy is lost and the backup service processor has failed.
This error causes a reset/reload of the remaining service processor. This
error is triggered by a flood of informational logs that can occur when links
are lost to a failed service processor.
 * A problem was fixed for the ASMI panel function for Self-Boot Engine (SBE)
SEEPROM validation that was exhausting the /tmp space on some four-node
systems, causing a reset/reload of the service processor and a service
processor dump. The probability of hitting this failure is very low.
* A problem was fixed for Time of Day (TOD) being lost for the real-time
clock (RTC) with an SRC B15A3303 logged when the service processor boots or
resets. This is a very rare problem that involves a timing problem in the
service processor kernel. If the server is running when the error occurs,
there will be an SRC B15A3303 logged, and the time of day on the service
processor will be incorrect for up to six hours until the hypervisor
synchronizes its (valid) time with the service processor. If the server is not
running when the error occurs, there will be an SRC B15A3303 logged, and if the
server is subsequently IPLed without setting the date and time in ASMI to fix
it, the IPL will abort with an SRC B7881201 which indicates to the system
operator that the date and time are invalid.
* A problem was fixed for intermittent failures for a reset of a Virtual
Function (VF) for SR-IOV adapters during Enhanced Error Handling (EEH) error
recovery. This is triggered by EEH events at a VF level only, not at the
adapter level. The error recovery fails if a data packet is received by the VF
while the EEH recovery is in progress. A VF that has failed can be recovered
by a partition reboot or a DLPAR remove and add of the VF.
* A problem was fixed for performance degradation of a partition due to task
dispatching delays. This may happen when a processor chip has all of its
shared processors removed and converted to dedicated processors. This could be
driven by a DLPAR remove of processors or Dynamic Platform Optimization (DPO).
* A problem was fixed for a logical partition activation error that can occur
when trying to activate a partition when the adapter hardware for an SR-IOV
logical port has been physically removed or is unavailable due to a hardware
issue. This message is reported on the HMC for the activation failure:
"Error: HSCL12B5 The operation to remove SR-IOV logical port failed
because of the following error: HSCL1552 The firmware operation failed with
extended error" where the logical port number will vary. This is an infrequent
problem that is only an issue if the adapter hardware has been removed or
another problem makes it unavailable. The workaround for this problem is to
physically add the hardware back in or correct the hardware issue. If that
cannot be done, create an alternate profile for the logical partition without
the SR-IOV logical port and use that until the hardware issue is resolved.
* A problem was fixed for incomplete periodic data gathered by IBM Service
for #EMXO PCIe expansion drawer predictive error analysis. The service data is
missing the PLX (PCIe switch) data that is needed for the debug of certain
errors.
 * A problem was fixed for a rare failure for an SPCN I2C command sent to a
PCIe I/O expansion drawer that can occur when service data is manually
collected with the hypervisor macros "xmsvc -dumpCCData" and "xmsvc
-logCCErrBuffer". If the hypervisor macro "xmsvc" is run to gather service
data and a CMC Alert occurs at the same time that requires an SPCN command to
clear the alert, then the I2C commands may be improperly serialized, resulting
in an SPCN I2C command failure. To prevent this problem, avoid using "xmsvc
-dumpCCData" and "xmsvc -logCCErrBuffer" to collect service data until this
fix is applied.
* A problem was fixed for a system hang or terminate with SRC B700F105 logged
during a Dynamic Platform Optimization (DPO) that is running with a partition
in a failed state but that is not shut down. If DPO attempts to relocate a
dedicated processor from the failed partition, the problem may occur. This
problem can be avoided by doing a shutdown of any failed partitions before
initiating DPO.
* A problem was fixed for a system crash with HMC message HSCL025D and SRC
B700F103 logged on a Live Partition Mobility (LPM) inactive migration attempt
that fails. The trigger for this problem is inactive migration that fails a
compatibility check between the source and target systems.
* A problem was fixed for time-out issues in Power Enterprise Pools 1.0 (PEP
1.0) that can affect performance by having non-optimal assignments of
processors and memory to the server logical partitions in the pool. For this
problem to happen, the server must be in a PEP 1.0 pool and the HMC must take
longer than 2 minutes to provide the PowerVM hypervisor with the information
about pool resources owned by this server. The problem can be avoided by
running the HMC optmem command before activating the partitions.
 * A problem was fixed for the Systems Management Services (SMS) menu "I/O
Device Information" option being incorrect when displaying the capacity for an
NVMe or Fibre Channel (FC) NVMe disk. This problem occurs every time the data
is displayed.
* A problem was fixed for an infrequent SRC of B7006956 that may occur during
a system power off. This SRC indicates that encrypted NVRAM locations failed
to synchronize with the copy in memory during the shutdown of the hypervisor.
This error can be ignored as the encrypted NVRAM information is stored in a
redundant location, so the next IPL of the system is successful.
* A problem was fixed for a misleading SRC B7006A20 (Unsupported Hardware
Configuration) that can occur for some error cases for PCIe #EMX0 expansion
drawers that are connected with copper cables. For cable unplug errors, the
SRC B7006A88 (Drawer TrainError) should be shown instead of the B7006A20. If
a B7006A20 is logged against copper cables with the signature "Prc
UnsupportedCableswithFewerChannels" and the message "NOT A 12CHANNEL CABLE",
this error should instead follow the service actions for a B7006A88 SRC.
* A problem was fixed for certain SR-IOV adapters not being able to create
the maximum number of VLANs that are supported for a physical port. There were
insufficient memory pages allocated for the physical functions for this adapter
type. The SR-IOV adapters affected have the following Feature Codes and
CCINs: #EC66/#EC67 with CCIN 2CF3.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
* A problem was fixed for certain SR-IOV adapters that can have B400FF02 SRCs
logged with LPA dumps during a vNIC remove operation. The adapters can have
issues with a deadlock in managing memory pages. In most cases, the operations
should recover and complete. This fix updates the adapter firmware to
XX.29.2003 for the following Feature Codes and CCINs: #EC2R/EC2S with CCIN
58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CE; and #EC66/EC67 with
CCIN 2CF3.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
* The following problems were fixed for certain SR-IOV adapters:
1) An error was fixed that occurs during a VNIC failover where the VNIC
backing device has a physical port down or read port errors with an SRC
B400FF02 logged.
2) A problem was fixed for adding a new logical port that has a PVID assigned,
which causes traffic on that VLAN to be dropped by other interfaces on the
same physical port that use OS VLAN tagging for that same VLAN ID. Each time a
logical port with a non-zero PVID that is the same as an existing VLAN is
dynamically added to a partition, or is activated as part of a partition
activation, the traffic flow stops for other partitions with OS-configured
VLAN devices with the same VLAN ID. This problem can be recovered from by
configuring an IP address on the logical port with the non-zero PVID and
initiating traffic flow on this logical port. It can be avoided by not
configuring logical ports with a PVID if other logical ports on the same
physical port are configured with OS VLAN devices.
This fix updates the adapter firmware to 11.4.415.37 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
* A problem was fixed for some serviceable events specific to the reporting
of EEH errors not being displayed on the HMC. The sending of an associated
call home event, however, was not affected. This problem is intermittent and
infrequent.
* A problem was fixed for possible partition errors following a concurrent
firmware update from FW910 or later. A precondition for this problem is that
DLPAR operations of either physical or virtual I/O devices must have occurred
prior to the firmware update. The error can take the form of a partition crash
at some point following the update. The frequency of this problem is low. If
the problem occurs, the OS will likely report a DSI (Data Storage Interrupt)
error. For example, AIX produces a DSI_PROC log entry. If the partition does
not crash, it is also possible that some subsequent I/O DLPAR operations will
fail.
* A problem was fixed for a missing hardware callout and guard for a
processor chip failure with SRC BC8AE540 and signature "ex(n0p0c5) (L3FIR[28])
L3 LRU array parity error".
* A problem was fixed for a missing hardware callout and guard for a
processor chip failure with Predictive Error (PE) SRC BC70E540 and signature
"ex(n1p2c6) (L2FIR[19]) Rc or NCU Pb data CE error". The PE error occurs after
the number of CE errors reaches a threshold of 32 errors per day.
 * A problem was fixed for a Live Partition Mobility (LPM) migration that
failed with the error "HSCL3659 The partition migration has been stopped
because orchestrator detected an error" on the HMC. This intermittent and rare
problem is triggered by the HMC being overrun with unneeded LPM message
requests from the hypervisor, which can cause a timeout in HMC queries that
results in the LPM operation being aborted. The workaround is to retry the LPM
migration, which will normally succeed.
 * A problem was fixed for a service processor mailbox (mbox) timeout error
with SRC B182953C during the IPL of systems with large memory configurations
and "I/O Adapter Enlarged Capacity" enabled from ASMI. The error indicates
that the hypervisor did not respond quickly enough to a message from the
service processor but this may not result in an IPL failure. The problem is
intermittent, so if the IPL does fail, the workaround is to retry the IPL.
* A problem was fixed for service processor failovers that are not
successful, causing the HMC to lose communication to the hypervisor and go into
the "Incomplete" state. The error is triggered when multiple failures occur
during a service processor failover, resulting in an extra host or service
processor initiated reset/reload during the failover, which causes the PSI
links to be in the wrong state at the end of the process.
* Problems were fixed for DLPAR operations that change the uncapped weight of
a partition and DLPAR operations that switch an active partition from uncapped
to capped. After changing the uncapped weight, the weight can be incorrect.
When switching an active partition from uncapped to capped, the operation can
fail.
* A problem was fixed where the Floating Point Unit Computational Test, which
should be set to "staggered" by default, has been changed in some circumstances
to be disabled. If you wish to re-enable this option, this fix is required.
After applying this service pack, do the following steps:
1) Sign in to the Advanced System Management Interface (ASMI).
2) Select Floating Point Computational Unit under the System Configuration
heading and change it from disabled to what is needed: staggered (run once per
core each day) or periodic (a specified time).
3) Click "Save Settings".
* A problem was fixed for a system termination with SRC B700F107 following a
time facility processor failure with SRC B700F10B. With the fix, the
transparent replacement of the failed processor will occur for the B700F10B if
there is a free core, with no impact to the system.
* A problem was fixed for an SR-IOV adapter in shared mode configured as
Virtual Ethernet Port Aggregator (VEPA) where unmatched unicast packets were
not forwarded to the promiscuous mode VF.
Update instructions:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
System firmware changes that affect certain systems
* On systems with an IBM i partition, a problem was fixed for physical I/O
property data not being able to be collected for an inactive partition booted
in "IOR" mode with SRC B200A101 logged. This can happen when making a system
plan (sysplan) for an IBM i partition using the HMC and the IBM i partition is
inactive. The sysplan data collection for the active IBM i partitions is
successful.
 * On systems with only Integrated Facility for Linux (IFL) processors and
AIX or IBM i partitions, a problem was fixed for performance issues for IFL
VMs (Linux and VIOS). This problem occurs if AIX or IBM i partitions are
active on a system with IFL-only cores. As a workaround, AIX or IBM i
partitions should not be activated on an IFL-only system. With the fix, the
activation of AIX and IBM i partitions is blocked on an IFL-only system. If
this fix is installed concurrently with AIX or IBM i partitions running, these
partitions will be allowed to continue to run until they are powered off. Once
powered off, the AIX and IBM i partitions will not be allowed to be activated
again on the IFL-only system.
VH940_084_027 / FW940.32
05/25/21 Impact: Availability Severity: HIPER
New features and functions
 * Support was added for Samsung DIMMs with part number 01GY853. If these
DIMMs are installed in a system with FW940 firmware older than FW940.32, the
DIMMs will fail and be guarded with SRC BC8A090F logged with HwpReturnCode
"RC_CEN_MBVPD_TERM_DATA_UNSUPPORTED_VPD_ENCODE".
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed for unrecoverable (UE) SRCs B150BA40
and B181BE12 being logged for a Hostboot TI (due to no actual fault), causing
nodes to be deconfigured and the system to re-IPL with reduced resources. The
problem can be triggered during a firmware upgrade or disruptive firmware
update. The problem can also occur on the first IPL after a concurrent
firmware update. The problem can also occur outside of a firmware update
scenario for some reconfiguration loops that can happen in Hostboot. There is
also a visible callout indicating one or more nodes/backplanes have a problem
which can lead to unnecessary repairs.
* HIPER/Pervasive: A problem was fixed for a checkstop due to an internal Bus
transport parity error or a data timeout on the Bus. This is a very rare
problem that requires a particular SMP transport link traffic pattern and
timing. Both the traffic pattern and timing are very difficult to achieve with
customer application workloads. The fix will have no measurable effect on most
customer workloads although highly intensive OLAP-like workloads may see up to
2.5% impact.
* A problem was fixed for an unrecoverable (UE) SRC B181BE12 being logged if
a service processor message acknowledgment is sent to a Hostboot instance in a
node that has already shutdown. This is a harmless error log and it should have
been marked as an informational log.
VH940_074_027 / FW940.31
03/24/21 Impact: Availability Severity: SPE
System firmware changes that affect all systems
* A problem was fixed for a partition hang in shutdown with SRC B200F00F
logged. The trigger for the problem is an asynchronous NX accelerator job
(such as gzip or NX842 compression) in the partition that fails to clean up
successfully. This is intermittent and does not cause a problem until a
shutdown of the partition is attempted. The hung partition can be recovered by
performing an LPAR dump on the hung partition. When the dump has been
completed, the partition will be properly shut down and can then be restarted
without any errors.
VH940_071_027 / FW940.30
02/04/21 Impact: Availability Severity: HIPER
New features and functions
* Support added to be able to set the NVRAM variable 'real-base' from the
Restricted OF Prompt (ROFP). Prior to the introduction of ROFP, customers had
the ability to set 'real-base' from the OF prompt. This capability was removed
in the initial delivery of ROFP in FW940.00. One use for this capability is
that, in some cases, OS images (usually Linux) need more memory to load their
image for boot. The OS image is loaded between 0x4000 ('load-base') and
0x2800000 ('real-base').
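The addresses quoted above bound the default load window; a quick back-of-envelope calculation (illustrative only, not from the release note) shows how much room an OS boot image gets before 'real-base' is raised:

```python
# Illustrative arithmetic only: the default window in which an OS boot image
# is loaded, bounded below by 'load-base' (0x4000) and above by 'real-base'
# (0x2800000). Raising 'real-base' from the ROFP enlarges this window.
load_base = 0x4000       # 16 KiB
real_base = 0x2800000    # 40 MiB

window = real_base - load_base
print(f"default load window: {window / (1024 * 1024):.2f} MiB")
```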
* Added support in ASMI for a new panel to do Self-Boot Engine (SBE) SEEPROM
validation. This validation can only be run at the service processor standby
state.
If the validation detects a problem, IBM recommends the system not be used
and that IBM service be called.
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed to be able to detect a failed PFET
sensing circuit in a core at runtime, and prevent a system fail with an
incomplete state when a core fails to wake up. The failed core is detected on
the subsequent IPL. With the fix, a core is called out with the PFET failure
with SRC BC13090F and hardware description "CME detected malfunctioning of PFET
headers" to better isolate the error with a correct callout.
* HIPER/Pervasive: A problem was fixed for soft error recovery not working
in the DPSS (Digital Power Subsystem Sweep) programmable power controller that
results in the DPSS being called out as a failed FRU. However, the DPSS is
recovered on the next IPL of the system. There is no impact to the running
system as there is a failover to a backup DPSS and the system continues running.
* DEFERRED: A problem was fixed for a system checkstop with an SRC BC14E540
logged that can occur during certain SMP cable failure scenarios. A re-IPL of
the system is needed to activate this fix.
* A problem was fixed for a slow down in PCIe adapter performance or loss of
adapter function caused by a reduction in interrupts available to service the
adapter. This problem can be triggered over time by partition activations or
DLPAR adds of PCIe adapters to a partition. This fix must be applied and the
system re-IPLed for existing adapter performance problems to be resolved.
However, the fix will prevent future issues without a re-IPL if applied before
the problem is observed.
* A problem was fixed for system UPIC cable validation not being able to
detect cross-plugged UPIC cables. If the cables are plugged incorrectly and
there is a need for service, modifying the wrong FRU locations can have adverse
effects on the system, including system outage. For a concurrent update, the
UPIC cables should be manually verified using ASMI after the fix is applied.
For a disruptive update, UPIC cable validation occurs automatically during
system power on.
* A problem was fixed for the error message severity for a DPSS (Digital
Power Subsystem Sweep) programmable power controller corruption problem with
SRC 1100D00C logged. The final error log for a corruption problem was being
issued as an Informational SRC when it should have been a Predictive SRC.
* A problem was fixed for not logging SRCs for certain cable pulls from the
#EMXO PCIe expansion drawer. With the fix, the previously undetected cable
pulls are now detected and logged with SRC B7006A8B and B7006A88 errors.
* A problem was fixed for a system hang and HMC "Incomplete" state that may
occur when a partition hangs in shutdown with SRC B200F00F logged. The trigger
for the problem is an asynchronous NX accelerator job (such as gzip or NX842
compression) in the partition that fails to clean up successfully. This is
intermittent and does not cause a problem until a shutdown of the partition is
attempted.
* A problem was fixed for an IPL failure with SRC BC10E504 logged. This may
occur if there is a deconfiguration of the first node in the system (node 0) or
a deconfiguration of processor chip 0 in the first node. The workaround to this
problem is to change the hardware configuration to ensure that the first
processor chip in the first node is configured.
* A problem was fixed for a VIOS, AIX, or Linux partition hang during an
activation at SRC CA000040. This will occur on a system that has been running
for more than 814 days when the boot of the partition is attempted if the
partitions are in POWER9_base or POWER9 processor compatibility mode.
A workaround to this problem is to re-IPL the system or to change the failing
partition to POWER8 compatibility mode.
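The 814-day figure is consistent with the 512 MHz POWER timebase register crossing 2^55 ticks of uptime; this derivation is an inference offered for orientation, not something the release note states:

```python
# Assumption (not stated in the note): the trigger is the 512 MHz POWER
# timebase register crossing 2**55 ticks of system uptime.
TIMEBASE_HZ = 512_000_000
SECONDS_PER_DAY = 86_400

days = 2**55 / TIMEBASE_HZ / SECONDS_PER_DAY
print(f"2**55 timebase ticks at 512 MHz ~= {days:.1f} days")
```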
* A problem was fixed for performance tools perfpmr, tprof and pex that may
not be able to collect data for the event based options.
This can occur any time an OS thread becomes idle. When the processor cores
are assigned to the next active process, the performance registers may be
disabled.
* A problem was fixed for a rare system hang with SRC BC70E540 logged that
may occur when adding processors through licensing or the system throttle state
changing (becoming throttled or unthrottled) on an Enterprise Pool system. The
trigger for the problem is a very small timing window in the hardware as the
processor loads are changing.
* A problem was fixed for an intermittent anchor card timeout with Informational
SRC B7009020 logged when reading TPM physical storage from the anchor card.
There is no customer impact for this problem as long as NVRAM is accessible.
* A problem was fixed for the On-Chip Controller (OCC) going into safe mode
(causes loss of processor performance) with SRC BC702616 logged. This problem
can be triggered by the loss of a power supply (an oversubscription event). The
problem can be circumvented by fixing the issue with the power supply.
* A problem was fixed for the error handling of a rare DIMM VPD error that
causes incorrect logging of SRCs B1232A09 and B1561314, calling out the system
planar, processor chip, Centaur DIMM controller, and riser card FRUs that
actually do not need replacement.
* A problem was fixed for the error handling of a system with an unsupported
memory configuration that exceeds available memory power. Without the fix, the
IPL of the system is attempted and fails with a segmentation fault with SRCs
B1818611 and B181460B logged that do not call out the incorrect DIMMs.
* A problem was fixed for the Self Boot Engine (SBE) going to termination
with an SRC B150BA8D logged when booting on a bad core. Once this happens, this
error will persist as the bad core is not deconfigured. To recover from this
error and be able to IPL, a failover can be done to the backup service
processor and IPL from there. With the fix, the failing core is deconfigured
and the SBE is reconfigured to use another core so the system is able to IPL.
* A problem was fixed for certain SR-IOV adapters that have a rare,
intermittent error with B400FF02 and B400FF04 logged, causing a reboot of the
VF. The error is handled and recovered without any user intervention needed.
The SR-IOV adapters affected have the following Feature Codes and CCINs:
#EC2R/#EC2S with CCIN 58FA; #EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN
2CE; and #EC66/#EC67 with CCIN 2CF3.
* A problem was fixed for certain Power Interface Board (PIB) errors with
BC200D01 logged not causing a callout and deconfiguration of the failing FRU.
This problem can result in an entire node failing to IPL instead of just having
the failing FRU deconfigured.
* A problem was fixed for Live Partition Mobility (LPM) being shown as
enabled at the OS when it has been disabled by the ASMI command line using the
server processor command of "cfcuod -LPM OFF". LPM is actually disabled and
the status shows correctly on the HMC. The status on the OS can be ignored
(for example as shown by the AIX command "lparstat -L") as LPM will not be
allowed to run when it is disabled.
* A problem was fixed so that SRC B7006A99, previously an informational log, is
now posted as a Predictive SRC with a callout of the CXP cable FRU. This fix
improves FRU
isolation for cases where a CXP cable alert causes a B7006A99 that occurs prior
to a B7006A22 or B7006A8B. Without the fix, the SRC B7006A99 is informational
and the latter SRCs cause a larger hardware replacement even though the earlier
event identified a probable cause for the cable FRU.
System firmware changes that affect certain systems
* On systems with an uncapped shared processor partition in POWER9 processor
compatibility mode, a problem was fixed for a system hang following Dynamic
Platform Optimization (DPO), memory mirroring defragmentation, or memory
guarding that happens as part of memory error recovery during normal operations
of the system.
* On systems with a partition using Virtual Persistent Memory (vPMEM) LUNS
configured with a 16 MB MPSS (Multiple Page Segment Size) mapping, a problem
was fixed for temporary system hangs. The temporary hang may occur while the
memory is involved in memory operations such as Dynamic Platform Optimization
(DPO), memory mirroring defragmentation, or memory guarding that happens as
part of memory error recovery during normal operations of the system.
* On systems with partitions having user mode enabled for the External
Interrupt Virtualization Engine (XIVE), a problem was fixed for a possible
system crash and HMC "Incomplete" state when a force DLPAR remove of a PCIe
adapter occurs after a dynamic LPAR (DLPAR) operation fails for that same PCIe
adapter.
* On systems with an IBM i partition, a problem was fixed for only seeing 50%
of the total Power Enterprise Pools (PEP) 1.0 memory that is provided. This
happens when querying resource information via QAPMCONF which calls MATMATR
0x01F6. With the fix, an error is corrected in the IBM i MATMATR option 0x01F6
that retrieves the memory information for the Collection Services.
VH940_061_027 / FW940.20
09/24/20 Impact: Data Severity: HIPER
New features and functions
* DEFERRED: Host firmware support for anti-rollback protection. This feature
implements firmware anti-rollback protection as described in NIST SP 800-147B
"BIOS Protection Guidelines for Servers". Firmware is signed with a "secure
version". Support added for a new menu in ASMI called "Host firmware security
policy" to update this secure version level at the processor hardware. Using
this menu, the system administrator can enable the "Host firmware secure
version lock-in" policy, which will cause the host firmware to update the
"minimum secure version" to match the currently running firmware. Use the
"Firmware Update Policy" menu in ASMI to show the current "minimum secure
version" in the processor hardware along with the "Minimum code level
supported" information. The secure boot verification process will block
installing any firmware secure version that is less than the "minimum secure
version" maintained in the processor hardware.
Prior to enabling the "lock-in" policy, it is recommended to accept the
current firmware level.
WARNING: Once lock-in is enabled and the system is booted, the "minimum
secure version" is updated and there is no way to roll it back to allow
installing firmware releases with a lesser secure version.
System firmware changes that affect all systems
* HIPER/Pervasive: A problem was fixed for certain SR-IOV adapters for a
condition that may result from frequent resets of adapter Virtual Functions
(VFs), or transmission stalls and could lead to potential undetected data
corruption.
The following additional fixes are also included:
1) The VNIC backing device goes to a powered off state during a VNIC failover
or Live Partition Mobility (LPM) migration. This failure is intermittent and
very infrequent.
2) Adapter time-outs with SRC B400FF01 or B400FF02 logged.
3) Adapter time-outs related to adapter commands becoming blocked with SRC
B400FF01 or B400FF02 logged
4) VF function resets occasionally not completing quickly enough resulting in
SRC B400FF02 logged.
This fix updates the adapter firmware to 11.4.415.33 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed to reduce excessive fan noise when systems are
operating at a low ambient temperature below 25 C. This affects Dynamic
Performance Mode (DPM) and Max Performance Mode (MPM) where the minimum fan
speed is lowered to 7200 RPMs if the ambient temperature of the system is below
25 C.
* A problem was fixed for the REST/Redfish interface to change the success
return code for object creation from "200" to "201". A "200" status code
indicates a generic success, while a "201" status code indicates that the
request was successful and, as a result, a resource has been created. The
Redfish Ruby Client, "redfish_client", may fail
a transaction if a "200" status code is returned when "201" is expected.
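The client-side expectation can be sketched as a status-code check; the helper below is illustrative and not part of any IBM or Redfish client library:

```python
# Illustrative sketch of the expectation described above: a POST that creates
# a resource should return HTTP 201 (Created). Older firmware returned 200
# (a generic success), which strict clients reject for create operations.
def creation_succeeded(status_code: int) -> bool:
    """True only for the code a successful resource creation should return."""
    return status_code == 201

assert creation_succeeded(201)      # fixed firmware behavior
assert not creation_succeeded(200)  # pre-fix behavior, rejected by strict clients
```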
* A problem was fixed to allow quicker recovery of PCIe links for the #EMXO
PCIe expansion drawer for a run-time fault with B7006A22 logged. The time for
recovery attempts can exceed six minutes on rare occasions which may cause I/O
adapter failures and failed nodes. With the fix, the PCIe links will recover
or fail faster (in the order of seconds) so that redundancy in a cluster
configuration can be used with failure detection and failover processing by
other hosts, if available, in the case where the PCIe links fail to recover.
* A problem was fixed for a concurrent maintenance "Repair and Verify" (R&V)
operation for a #EMX0 fanout module that fails with an "Unable to isolate the
resource" error message. This should occur only infrequently for cases where a
physical hardware failure has occurred which prevents access to slot power
controls. This problem can be worked around by bringing up the "PCIe Hardware
Topology" screen from either ASMI or the HMC after the hardware failure but
before the concurrent repair is attempted. This will avoid the problem with the
PCIe slot isolation. These steps can also be used to recover from the error to
allow the R&V repair to be attempted again.
* A problem was fixed for a rare system hang that can occur when a page of
memory is being migrated. Page migration (memory relocation) can occur for a
variety of reasons, including predictive memory failure, DLPAR of memory, and
normal operations related to managing the page pool resources.
* A problem was fixed for utilization statistics for commands such as HMC
lslparutil and third-party lpar2rrd that do not accurately represent CPU
utilization. The values are incorrect every time for a partition that is
migrated with Live Partition Mobility (LPM). Power Enterprise Pools 2.0 is not
affected by this problem. If this problem has occurred, here are three possible
recovery options:
1) Re-IPL the target system of the migration.
2) Or delete and recreate the partition on the target system.
3) Or perform an inactive migration of the partition. The cycle values get
zeroed in this case.
* A problem was fixed for running PCM on a system with SR-IOV adapters in
shared mode that results in an "Incomplete" system state with certain
hypervisor tasks deadlocked. This problem is rare and is triggered when using
SR-IOV adapters in shared mode and gathering performance statistics with PCM
(Performance Collection and Monitoring) and also having a low-level error on an
adapter. The only way to recover from this condition is to re-IPL the system.
* A problem was fixed for an enhanced PCIe expansion drawer FPGA reset
causing EEH events from the fanout module or cable cards that disrupt the PCIe
lanes for the PCIe adapters. This problem affects systems with the PCIe
expansion drawer enhanced fanout module (#EMXH) and the enhanced cable card
(#EJ19).
The error is associated with the following SRCs being logged:
B7006A8D with PRC 37414123 (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::LocalFpgaHwReset)
B7006A8E with PRC 3741412A (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::RemoteFpgaHwReset)
If the EEH errors occur, the OS device drivers automatically recover but with
a reset of affected PCIe adapters that would cause a brief interruption in the
I/O communications.
* A problem was fixed for the FRU callout lists for SRCs B7006A2A and
B7006A2B possibly not including the FRU containing the PCIe switch as the
second FRU in the callout list. The card/drive in the slot is the first callout
and the FRU containing the PCIe switch should be the second FRU in the callout
list. This problem occurs when the PCIe slot is on a different planar than the
PCIe switch backing the slot. This impacts the NVMe backplanes (P2 with slots
C1-C4) hosting the PCIe backed SSD NVMe U.2 modules that have feature codes
#EC5J and #EC5K. As a workaround for B7006A2A and B7006A2B errors where the
callout FRU list is processed and the problem is not resolved, consider
replacing the backplane (which includes the PCIe switch) if this was omitted in
the FRU callout list.
* A problem was fixed for a PCIe3 expansion drawer cable that has hidden
error logs for a single lane failure. This happens whenever a single lane error
occurs. Subsequent lane failures are not hidden and have visible error logs.
Without the fix, the hidden or informational logs would need to be examined to
gather more information for the failing hardware.
* A problem was fixed for an infrequent issue after a Live Partition Mobility
(LPM) operation from a POWER9 system to a POWER8 or POWER7 system. The issue
may cause unexpected OS behavior, which may include loss of interrupts, device
time-outs, or delays in dispatching. Rebooting the affected target partition
will resolve the problem.
* A problem was fixed for a partition crash or hang following a partition
activation or a DLPAR add of a virtual processor. For partition activation,
this issue is only possible for a system with a single partition owning all
resources. For DLPAR add, the issue is extremely rare.
* A problem was fixed for a DLPAR remove of memory from a partition that
fails if the partition contains 65535 or more LMBs. With 16 MB LMBs, this error
threshold is 1 TB of memory. With 256 MB LMBs, it is 16 TB of memory. A reboot
of the partition after the DLPAR will remove the memory from the partition.
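The two thresholds quoted above follow directly from the 65535-LMB limit; a quick check of the arithmetic:

```python
# Worked arithmetic for the 65535-LMB DLPAR threshold quoted above.
LMB_LIMIT = 65535
MIB_PER_TIB = 1024 * 1024

for lmb_mib in (16, 256):
    total_tib = LMB_LIMIT * lmb_mib / MIB_PER_TIB
    print(f"{lmb_mib} MB LMBs -> limit reached at about {total_tib:.2f} TB")
# 16 MB LMBs put the threshold just under 1 TB; 256 MB LMBs, just under 16 TB.
```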
* A problem was fixed for an IPL failure with SRC BA180020 logged for an
initialization failure on a PCIe adapter in a PCIe3 expansion drawer. The PCIe
adapters that are intermittently failing on the PCIe probe are the PCIe2 4-port
Fibre Channel Adapter with feature code #5729 and the PCIe2 4-port 1 Gb
Ethernet Adapter with feature code #5899. The failure can only occur on an IPL
or re-IPL and it is very infrequent. The system can be recovered with a re-IPL.
* A problem was fixed for a partition configured with a large number
(approximately 64) of Virtual Persistent Memory (PMEM) LUNs hanging during the
partition activation with a CA00E134 checkpoint SRC posted. Partitions
configured with approximately 64 PMEM LUNs will likely hang and the greater the
number of LUNs, the greater the possibility of the hang. The circumvention to
this problem is to reduce the number of PMEM LUNs to 64 or less in order to
boot successfully. The PMEM LUNs are also known as persistent memory volumes
and can be managed using the HMC. For more information on this topic, refer to
https://www.ibm.com/support/knowledgecenter/POWER9/p9efd/p9efd_lpar_pmem_settings.htm
* A problem was fixed for non-optimal On-Chip Controller (OCC) processor
frequency adjustments when system power limits or user power caps are
exceeded. When a workload causes power limits or caps to be exceeded, there
can be large frequency swings for the processors and a processor chip can get
stuck at minimum frequency. With the fix, the OCC now waits for new power
readings when changing the processor frequency and uses a master power capping
frequency to keep all processors at the same frequency. As a workaround for
this problem, do not set a power cap or run a workload that would exceed the
system power limit.
* A problem was fixed for PCIe resources under a deconfigured PCIe Host
Bridge (PHB) being shown on the OS host as available resources when they should
be shown as deconfigured. While this fix can be applied concurrently, a re-IPL
of the system is needed to correct the state of the PCIe resources if a PHB had
already been deconfigured.
* A problem was fixed for IBM Power Enterprise Pools 2.0 (PEP 2.0) where the
HMC "Throttling" flag, once activated, stays on even after the system is back
in compliance. The problem is triggered if PEP 2.0 has expired or is out of
compliance. When the system is back in compliance, the throttling stops but the
HMC still displays "Throttling" as "ON". This problem is extremely infrequent
as normal PEP 2.0 usage should not need throttling. The "Throttling" flag will
get turned off if the system gets an updated PEP 2.0 renewal key.
* A problem was fixed for incorrect run-time deconfiguration of a processor
core with SRC B700F10B. This problem can be circumvented by a reconfiguration
of the processor core but this should only be done with the guidance of IBM
Support to ensure the core is good.
* A problem was fixed for certain SR-IOV adapter errors where a B400F011 is
reported instead of a more descriptive B400FF02 or B400FF04. The LPA dump
still occurs, which can be used to isolate the issue. The SR-IOV adapters
affected have the following Feature Codes and CCINs: #EC2R/#EC2S with CCIN
58FA; #EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN 2CE; and #EC66/#EC67
with CCIN 2CF3.
* A problem was fixed for mixing modes on the ports of SR-IOV adapters that
causes SRC B200A161, B200F011, B2009014 and B400F104 to be logged on boot of
the failed adapter. This error happens when one port of the adapter is changed
to option 1 with a second port set at either option 0 or option 2. The error
can be cleared by taking the adapter out of SR-IOV shared mode. The SR-IOV
adapters affected have the following Feature Codes and CCINs: #EC2R/#EC2S with
CCIN 58FA; #EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN 2CE; and
#EC66/#EC67 with CCIN 2CF3.
* A problem was fixed for certain SR-IOV adapters with the following issues:
1) The VNIC backing device goes to a powered off state during a VNIC failover
or Live Partition Mobility (LPM) migration. This failure is intermittent and
very infrequent.
2) Adapter time-outs with SRC B400FF01 or B400FF02 logged.
3) Adapter time-outs related to adapter commands becoming blocked with SRC
B400FF01 or B400FF02 logged.
4) VF function resets occasionally not completing quickly enough resulting in
SRC B400FF02 logged.
This fix updates the adapter firmware to 11.4.415.33 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with CCIN
2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and #EN0K/#EN0L
with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed for NovaLink-created virtual Ethernet and vNIC adapters
having incorrect SR-IOV Hybrid Network Virtualization (HNV) values. AIX and
other OS hosts may be unable to use the adapters. This happens for all virtual
Ethernet and vNIC adapters created by NovaLink in the FW940 releases up to the
FW940.10 service pack. The fix will correct the settings for newly
NovaLink-created virtual adapters, but any pre-existing virtual adapters
created by NovaLink in FW940 must be deleted and recreated.
* A problem was fixed for partitions configured to run as AIX, VIOS, or Linux
partitions that also own specific Fibre Channel (FC) I/O adapters (see below)
are subject to a partition crash during boot if the partition does not already
have a boot list. During the initial boot of a new partition (containing 577F,
578E, 578F or 579B adapters), the boot might fail with one of the following
reference codes: BA210001, BA218001, BA210003, or BA218003. This most often
occurs on deployments of new partitions that are booting for the first time for
either a network install or booting to the Open Firmware prompt or SMS menus
for the first time. The issue requires that the partition owns one or more of
the following FC adapters and that these adapters are running at microcode
firmware levels older than version 11.4.415.5:
- Feature codes #EN1C/#EN1D and #EL5X/#EL5W with CCIN 578E
- Feature codes #EN1A/#EN1B and #EL5U/#EL5V with CCIN 578F
- Feature codes #EN0A/#EN0B and #EL5B/#EL43 with CCIN 577F
The frequency of the problem is somewhat rare because it requires the
following:
- Partition does not already have a default boot list
- Partition configured with one of the FC adapters listed above
- The FC adapters must be running a version of microcode with
unsigned/unsecure adapter microcode
The following workaround was created for systems having this issue:
https://www.ibm.com/support/pages/node/1367103.
With the fix, the FC adapters are given a temporary substitute for the FCode
on the adapter but not the entire microcode image. The adapter microcode is not
updated. This workaround is done so the system can boot from the adapter
until the adapter can be updated by the customer with the latest available
microcode from IBM Fix Central. In the meantime, the FCode substitution is made
from the 12.4.257.15 level of the microcode.
* A problem was fixed for mixing memory DIMMs with different timings
(different vendors) under the same memory controller that fail with an SRC
BC20E504 error and DIMMs deconfigured. This is an "MCBIST_BRODCAST_OUT_OF_SYNC"
error. The loss of memory DIMMs can result in an IPL failure. This problem
can happen if the memory DIMMs have a certain level of timing differences. If
the timings are not compatible, the failure will occur on the IPL during the
memory training. To circumvent this problem, each memory controller should have
only memory DIMMs from the same vendor plugged.
* A problem was fixed for the SR-IOV logical port of an I/O adapter logging
a B400FF02 error because of a time-out waiting on a response from the firmware.
This rare error requires a very heavily loaded system. For this error, word 8
of the error log is 80090027. No user intervention is needed for this error as
the logical port recovers and continues with normal operations.
* A problem was fixed for a security vulnerability for the Self Boot Engine
(SBE). The SBE can be compromised from the service processor to allow
injection of malicious code. An attacker that gains root access to the service
processor could compromise the integrity of the host firmware and bypass the
host firmware signature verification process. This compromised state can not be
detected through TPM attestation. This is Common Vulnerabilities and Exposures
issue number CVE-2021-20487.
System firmware changes that affect certain systems
* On systems with an IBM i partition, a problem was fixed for a dedicated
memory IBM i partition running in P9 processor compatibility mode failing to
activate with HSCL1552 "the firmware operation failed with extended error".
This failure only occurs under a very specific scenario - the new amount of
desired memory is less than the current desired memory, and the Hardware Page
Table (HPT) size needs to grow.
* On systems with AIX and Linux partitions, a problem was fixed for AIX and
Linux partitions that crash or hang when reporting any of the following
Partition Firmware RTAS ASSERT rare conditions:
1) SRC BA33xxxx errors - Memory allocation and management errors.
2) SRC BA29xxxx errors - Partition Firmware internal stack errors.
3) SRC BA00E8xx errors - Partition Firmware initialization errors during
concurrent firmware update or Live Partition Mobility (LPM) operations.
This problem should be very rare. If the problem does occur, a partition
reboot is needed to recover from the error.
VH940_050_027 / FW940.10
05/22/20 Impact: Availability Severity: SPE
New features and functions
* Support was added for redundant VPD EEPROMs. If the primary module VPD
EEPROM fails, the system will automatically change to the backup module.
* Enable periodic logging of internal component operational data for the
PCIe3 expansion drawer paths. The logging of this data does not impact the
normal use of the system.
* Support added for SR-IOV Hybrid Network Virtualization (HNV) in a
production environment (no longer a Technology Preview) for AIX and IBM i.
This capability allows an AIX or IBM i partition to take advantage of the
efficiency and performance benefits of SR-IOV logical ports and participate in
mobility operations such as active and inactive Live Partition Mobility (LPM)
and Simplified Remote Restart (SRR). HNV is enabled by selecting a new
Migratable option when an SR-IOV logical port is configured. The Migratable
option is used to create a backup virtual device. The backup virtual device
can be either a Virtual Ethernet adapter or a virtual Network Interface
Controller (vNIC) adapter. In addition to this firmware, HNV support in a
production environment requires HMC 9.1.941.0 or later, AIX Version 7.2 with
the 7200-04 Technology Level and Service Pack 7200-04-02-2015 or AIX Version
7.1 with the 7100-05 Technology Level and Service Pack 7100-05-06-2015, IBM i
7.3 TR8 or IBM i 7.4 TR2, and VIOS 3.1.1.20.
System firmware changes that affect all systems
* DEFERRED: A problem was fixed for a processor core failure with SRCs
B150BA3C and BC8A090F logged that deconfigures the entire processor for the
current IPL. A re-IPL of the system will recover the lost processor with only
the bad core guarded.
* A problem was fixed for Performance Monitor Unit (PMU) events that had the
incorrect Alink address (Xlink data given instead) that could be seen in 24x7
performance reports. The Alink event data is a recent addition for FW940 and
would not have been seen at the earlier firmware levels.
* A problem was fixed for an SR-IOV adapter hang with B400FF02/B400FF04
errors logged during firmware update or error recovery. The adapter may
recover after the error log and dump, but it is possible the adapter VF will
remain disabled until the partition using it is rebooted. This affects the
SR-IOV adapters with the following feature codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3.
* A problem was fixed for a failed clock card causing a node to be guarded
during the IPL of a multi-node system. With the fix, the redundant clock card
allows all the nodes to IPL in the case of a single clock card failure.
* A problem was fixed for the green power LED on the System Control Unit
(SCU) not being lit even though the system is powered on. Without the fix, the
LED is always in the off state.
* A problem was fixed for a loss of service processor redundancy after a
failover to the backup on a Hostboot IPL error. Although the failover is
successful to the backup service processor, the original primary service
processor may terminate. The failed service processor can be recovered from
termination by using a soft reset from ASMI.
* A problem was fixed for extraneous B400FF01 and B400FF02 SRCs logged when
moving cables on SR-IOV adapters. This is an infrequent error that can occur
if the HMC performance monitor is running at the same time the cables are
moved. These SRCs can be ignored when accompanied by cable movement.
* A problem was fixed for certain SR-IOV adapters that can have B400FF02 SRCs
logged with LPA dumps during Live Partition Mobility (LPM) migrations or vNIC
failovers. The adapters can have issues with a deadlock on error starts after
many resets of the VF and errors in managing memory pages. In most cases, the
operations should recover and complete. This fix updates the adapter firmware
to 1X.25.6100 for the following Feature Codes and CCINs: #EC2R/EC2S with CCIN
58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with
CCIN 2CF3.
The SR-IOV adapter firmware level update for the shared-mode adapters happens
under user control to prevent unexpected temporary outages on the adapters. A
system reboot will update all SR-IOV shared-mode adapters with the new firmware
level. In addition, when an adapter is first set to SR-IOV shared mode, the
adapter firmware is updated to the latest level available with the system
firmware (and it is also updated automatically during maintenance operations,
such as when the adapter is stopped or replaced). And lastly, selective manual
updates of the SR-IOV adapters can be performed using the Hardware Management
Console (HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the updates:
https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm
.
Note: Adapters that are capable of running in SR-IOV mode, but are currently
running in dedicated mode and assigned to a partition, can be updated
concurrently either by the OS that owns the adapter or the managing HMC (if OS
is AIX or VIOS and RMC is running).
* A problem was fixed where SR-IOV adapter VFs occasionally failed to
provision successfully on the low-speed ports (1 Gbps) with SRC B400FF04
logged, or SR-IOV adapter VFs occasionally failed to provision successfully
with SRC B400FF04 logged when the RoCE option is enabled.
This affects the adapters with low speed ports (1 Gbps) with the following
Feature Codes and CCINs: #EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN
2CC0, and #EN0K/EN0L with CCIN 2CC1. And it affects the adapters with the
ROCE option enabled with the following feature codes and CCINs: #EC2R/EC2S
with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3.
* A problem was fixed where expired trial or elastic Capacity on Demand (CoD)
memory did not warn of the use of unlicensed memory if the memory was not
returned. This lack of warning can occur if the trial memory has been allocated
as Virtual Persistent Memory (vPMEM).
* A problem was fixed for a B7006A96 fanout module FPGA corruption error that
can occur in unsupported PCIe3 expansion drawer(#EMX0) configurations that mix
an enhanced PCIe3 fanout module (#EMXH) in the same drawer with legacy PCIe3
fanout modules (#EMXF, #EMXG, #ELMF, or #ELMG). This causes the FPGA on the
enhanced #EMXH to be updated with the legacy firmware and it becomes a
non-working and unusable fanout module. With the fix, the unsupported #EMX0
configurations are detected and handled gracefully without harm to the FPGA on
the enhanced fanout modules.
* A problem was fixed for possible dispatching delays for partitions running
in POWER8, POWER9_base or POWER9 processor compatibility mode.
* A problem was fixed for system memory not returned after create and delete
of partitions, resulting in slightly less memory available after configuration
changes in the systems. With the fix, an IPL of the system will recover any of
the memory that was orphaned by the issue.
* A problem was fixed for failover support for the Mover Service Partition
(MSP) where a failover to the MSP partner during an LPM could cause the
migration to abort. This vulnerability is only for a very specific window in
the migration process. The recovery is to restart the migration operation.
* A rare problem was fixed for a checkstop during an IPL that fails to
isolate and guard the problem core. An SRC is logged with B1xxE5xx and an
extended hex word 8 xxxxDD90. With the fix, the failing hardware is guarded
and a node is possibly deconfigured to allow the subsequent IPLs of the system
to be successful.
* A problem was fixed for a hypervisor error during system shutdown where a
B7000602 SRC is logged and the system may also briefly go "Incomplete" on the
HMC but the shutdown is successful. The system will power back on with no
problems so the SRC can be ignored if it occurred during a shutdown.
* A problem was fixed for certain NVRAM corruptions causing a system crash
with a bad pointer reference instead of expected Terminate Immediate (TI) with
B7005960 logged.
* A problem was fixed for certain SR-IOV adapters that do not support the
"Disable Logical Port" option from the HMC but the HMC was allowing the user to
select this, causing incorrect operation. The invalid state of the logical
port causes an "Enable Logical Port" to fail in a subsequent operation. With
the fix, the HMC provides the message that the "Disable Logical Port" is not
supported for the adapter. This affects the adapters with the following
Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3, #EN17/EN18 with CCIN 2CE4,
#EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN 2CC0, and #EN0K/EN0L with CCIN
2CC1.
* A problem was fixed for clock card errors not being called out in the error
log when the primary clock card fails. This problem makes it more difficult
for the system user to be aware that clock card redundancy has been lost, and
that service is needed to restore the redundancy.
* A problem was fixed to remove unneeded resets of a VF for SR-IOV adapters,
providing for improved performance of the startup or recovery time of the VF.
This performance difference may be noticed during a Live Partition Mobility
migration of a partition or during vNIC (Virtual Network Interface Controller)
failovers where many resets of VFs are occurring.
* A problem was fixed for SR-IOV adapters having an SRC B400FF04 logged when
a VF is reset. This is an infrequent issue and can occur for a Live Partition
Mobility migration of a partition or during vNIC (Virtual Network Interface
Controller) failovers where many resets of VFs are occurring. This error is
recovered automatically with no impact on the system.
* A problem was fixed for initial configuration of SR-IOV adapter VFs with
certain configuration settings for the following Feature Codes and CCINs:
#EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN
2CEC; and #EC66/EC67 with CCIN 2CF3.
These VFs may then fail following an adapter restart, with other VFs
functioning normally. The error causes the VF to fail with an SRC B400FF04
logged. With the fix, VFs are configured correctly when created.
Because the error condition may pre-exist in an incorrectly configured
logical port, a concurrent update of this fix may trigger a logical port
failure when the VF logical port is restarted during the firmware update.
Existing VFs with the failure condition can be recovered by dynamically
removing/adding the failed port and are automatically recovered during a system
restart.
* A problem was fixed for TPM hardware failures not causing SRCs to be logged
with a call out if the system is configured in ASMI to not require TPM for the
IPL. If this error occurs, the user would not find out about it until they
needed to run with TPM on the IPL. With the fix, the error logs and
notifications will occur regardless of how the TPM is configured.
System firmware changes that affect certain systems
* On systems with an IBM i partition, a problem was fixed for a D-mode IPL
failure when using a USB DVD drive in an IBM 7226 multimedia storage
enclosure. Error logs with SRC BA16010E, B2003110, and/or B200308C can occur.
As a circumvention, an external DVD drive can be used for the D-mode IPL.
* On systems with an IBM i partition, a problem was fixed that occurs after a
Live Partition Mobility (LPM) of an IBM i partition that may cause issues
including dispatching delays and the inability to do further LPM operations of
that partition. The frequency of this problem is rare. A partition
encountering this error can be recovered with a reboot of the partition.
* On systems with Integrated Facility for Linux (IFL) processors and
Linux-only partitions, a problem was fixed for Power Enterprise Pools (PEP) 1.0
not going back into "Compliance" when resources are moved from Server 1 to
Server 2, causing an expected "Approaching Out Of Compliance", but not
automatically going back into compliance when the resources are no longer used
on Server 1. As a circumvention, the user can do an extra "push" and "pull" of
one resource to make the Pool discover it is back in "Compliance".
* On systems with an IBM i partition in POWER9 processor compatibility mode,
a problem was fixed for an MSD in IBM i with SRCs B6000105 or B6000305 logged
when a PCIe Host Bridge (PHB) or PCIe Expansion Drawer (#EMX0) is added to the
partition. For this to occur, the adapter had to be previously assigned to a
partition (any OS) that was in POWER9 processor compatibility mode and then
removed through a DLPAR or partition shut down such that the adapter is taken
through recovery.
* On systems with an IBM i partition, a problem was fixed for a possibly
incorrect number of Memory COD (Capacity On Demand) resources shown when
gathering performance data with IBM i Collection Services. Memory resources
activated by Power Enterprise Pools (PEP) 1.0 will be missing from the data.
An error was corrected in the IBM i MATMATR option 0X01F6 that retrieves the
Memory COD information for the Collection Services.
* For systems with deconfigured cores and using the default performance and
power setting of "Dynamic Performance Mode" or "Maximum Performance Mode", a
rare problem was fixed for an incorrect voltage/frequency setting for the
processors during heavy workloads with high ambient temperature. This error
could impact power usage, expected performance, or system availability if a
processor fault occurs. This problem can be avoided by using ASMI "Power and
Performance Mode Setup" to disable "All modes" when there are cores
deconfigured in the system.
VH940_041_027 / FW940.02
02/18/20 Impact: Function Severity: HIPER
System firmware changes that affect all systems
* A problem was fixed for an HMC "Incomplete" state for a system after the
HMC user password is changed with ASMI on the service processor. This problem
can occur if the HMC password is changed on the service processor but not also
on the HMC, and a reset of the service processor happens. With the fix, the
HMC will get the needed "failed authentication" error so that the user knows to
update the old password on the HMC.
System firmware changes that affect certain systems
* HIPER/Pervasive: For systems using PowerVM NovaLink to manage
partitions, a problem was fixed for the hypervisor rejecting setting the system
to be NovaLink managed. The following error message is given: "FATAL
pvm_apd[]: Hypervisor encountered an error creating the ibmvmc device. Error
number 5." This always happens in FW940.00 and FW940.01, which prevents a
system from transitioning to be NovaLink managed at these firmware levels. If
you were successfully running as NovaLink managed already on FW930 and upgraded
to FW940, you would not experience this issue.
For more information on PowerVM NovaLink, refer to the IBM Knowledge Center
article:
https://www.ibm.com/support/knowledgecenter/POWER9/p9eig/p9eig_kickoff.htm.
VH940_034_027 / FW940.01
01/09/20 Impact: Security Severity: SPE
New features and functions
* Support was added for improved security for the service processor password
policy. For the service processor, the "admin", "hmc" and "general" password
must be set on first use for newly manufactured systems and after a factory
reset of the system. The REST/Redfish interface will return an error saying the
user account is expired in these scenarios. This policy change helps ensure
that the service processor is not left in a state with a well-known password.
The user can change from an expired default password to a new
password using the Advanced System Management Interface (ASMI).
* Support was added for real-time data capture for PCIe3 expansion drawer
(#EMX0) cable card connection data via resource dump selector on the HMC or in
ASMI on the service processor. Using the resource selector string of "xmfr
-dumpccdata" will non-disruptively generate an RSCDUMP type of dump file that
has the current cable card data, including data from cables and the retimers.
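The resource dump selector described above can also be driven from the HMC command line. The sketch below only builds the command string for illustration; it assumes the HMC `startdump` CLI with a resource dump type, and the managed-system name is a placeholder.

```python
# Sketch: building the HMC CLI invocation for the cable card resource dump.
# Assumes the HMC 'startdump' command; the managed-system name is hypothetical.
import shlex

def ccdata_dump_command(managed_system: str) -> str:
    """Return an HMC command that non-disruptively collects PCIe3 expansion
    drawer cable card data via the "xmfr -dumpccdata" resource selector."""
    return ("startdump -m %s -t resource -r %s"
            % (shlex.quote(managed_system), shlex.quote("xmfr -dumpccdata")))

print(ccdata_dump_command("Server-9080-M9S-SN1234567"))
```

The resulting RSCDUMP file can then be retrieved and reviewed like any other dump on the HMC.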
System firmware changes that affect all systems
* A problem was fixed for persistent high fan speeds in the system after a
service processor failover. To restore the fans to normal speed without
re-IPLing the system requires the following steps:
1) Use ASMI to perform a soft reset of the backup service processor.
2) When the backup service processor has completed its reset, use the HMC to
do an administrative failover, so that the reset service processor becomes the
primary.
3) Use ASMI to perform a soft reset on the new backup service processor.
When this has completed, system fan speeds should be back to normal.
* A problem was fixed for system hangs or incomplete states displayed by
HMC(s) with SRC B182951C logged. The hang can occur during operations that
require a memory relocation for any partition such as Dynamic Platform
Optimization (DPO), memory mirroring defragmentation, or memory guarding that
happens as part of memory error recovery during normal operations of the system.
* A problem was fixed for possible unexpected interrupt behavior for
partitions running in POWER9 processor compatibility mode. This issue can
occur during the boot of a partition running in POWER9 processor compatibility
mode with an OS level that supports the External Interrupt Virtualization
Engine (XIVE) exploitation mode. For more information on compatibility modes,
see the following two articles in the IBM Knowledge Center:
1) Processor compatibility mode overview:
https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcm.htm
2) Processor compatibility mode definitions:
https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcmdefs.htm
* A problem was fixed for an intermittent IPL failure with SRC B181E540
logged with fault signature " ex(n2p1c0) (L2FIR[13]) NCU Powerbus data
timeout". No FRU is called out. The error may be ignored and the re-IPL is
successful. The error occurs very infrequently. This is the second iteration
of the fix that has been released. Expedient routing of the Powerbus
interrupts did not occur in all cases in the prior fix, so the timeout problem
was still occurring.
System firmware changes that affect certain systems
* On systems running IBM i partitions configured as Restricted I/O partitions
that are also running in either P7 or P8 processor compatibility mode, a
problem was fixed for a likely hang during boot with BA210000 and BA218000
checkpoints and error logs after migrating to FW940.00 level system firmware.
The trigger for the problem is booting IBM i partitions configured as Restricted
I/O partitions in P7 or P8 compatibility mode on FW940.00 system firmware.
Such partitions are usually configured this way so that they can be used for
live partition migration (LPM) to and
from P7/P8 systems. Without the fix, the user can do either of the following
as circumventions for the boot failure of the IBM i partition:
1) Move the partition to P9 compatibility mode
2) Or remove the 'Restricted I/O Partition' property.
VH940_027_027 / FW940.00
11/25/19 Impact: New Severity: New
GA Level with key features included listed below
* All features and fixes from the FW930.11 service pack (and below) are
included in this release. At the time of the FW940.00 release, FW930.11 is
a future FW930 service pack scheduled for the fourth quarter of 2019.
New features and functions
* User Mode NX Accelerator Enablement for PowerVM. This enables access to
NX accelerators such as the gzip engine through user-mode interfaces. The
IBM Virtual HMC (vHMC) 9.1.940 provides a user interface to this feature. The
LPAR must be running in POWER9 compatibility mode to use this feature. For
more information on compatibility modes, see the following two articles in the
IBM Knowledge Center:
1) Processor compatibility mode overview:
https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcm.htm
2) Processor compatibility mode definitions:
https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcmdefs.htm
* Support for SR-IOV logical ports in IBM i restricted I/O mode.
* Support for user mode enablement of the External Interrupt Virtualization
Engine (XIVE). This user mode enables the management of interrupts to move
from the hypervisor to the operating system for improved efficiency. Operating
systems may also have to be updated to enable this support. The LPAR must be
running in POWER9 compatibility mode to use this feature. For more information
on compatibility modes, see the following two articles in the IBM Knowledge
Center:
1) Processor compatibility mode overview:
https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcm.htm
2) Processor compatibility mode definitions:
https://www.ibm.com/support/knowledgecenter/POWER9/p9hc3/p9hc3_pcmdefs.htm
* Extended support for PowerVM Firmware Secure Boot. This feature restricts
access to the Open Firmware prompt and validates all adapter boot driver code.
Boot adapters, or adapters which may be used as boot adapters in the future,
must be updated to the latest microcode from IBM Fix Central. The latest
microcode will ensure the adapters support the Firmware Secure Boot feature of
Power Systems. This requirement applies when updating system firmware from a
level prior to FW940 to levels FW940 and later. The latest adapter microcode
levels include signed boot driver code. If a boot-capable PCI adapter is not
installed with the latest level of adapter microcode, the partition which owns
the adapter will boot, but error logs with SRCs BA5400A5 or BA5400A6 will be
posted. Once the adapter(s) are updated, the error logs will no longer be
posted.
* Linux OS support was added for PowerVM LPARs for the PCIe4 2x100GbE
ConnectX-5 RoCE adapter with feature codes of #EC66/EC67 and CCIN 2CF3. Linux
versions RHEL 7.5 and SLES 12.3 are supported.
System firmware changes that affect all systems
* A problem was fixed for incorrect call outs for PowerVM hypervisor
terminations with SRC B7000103 logged. With the fix, the call outs are changed
from SVCDOCS, FSPSP04, and FSPSP06 to FSPSP16. When this type of termination
occurs, IBM support requires the dumps be collected to determine the cause of
failure.
* A problem was fixed for an IPL failure with the following possible SRCs
logged: 11007611, 110076x1, 1100D00C, and 110015xx. The service processor may
reset/reload for this intermittent error and end up in the termination state.
4.0 How to Determine Currently Installed Firmware Level
You can view the server's current firmware level on the Advanced System
Management Interface (ASMI) Welcome pane. It appears in the top right corner.
Example: VH920_123.
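As an alternative to the ASMI Welcome pane, the level can be queried from the HMC command line. The sketch below only constructs the command string for illustration; it assumes the HMC `lslic` command, and the managed-system name is a placeholder.

```python
# Sketch: building the HMC CLI query for the installed firmware levels.
# Assumes the HMC 'lslic' command; the managed-system name is hypothetical.
def firmware_level_command(managed_system: str) -> str:
    """Return an HMC command that lists the managed system's Licensed
    Internal Code (firmware) levels."""
    return "lslic -m %s -t sys" % managed_system

print(firmware_level_command("Server-9080-M9S-SN1234567"))
```

The command output includes the activated, installed, and accepted levels for the managed system.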
----------------------------------------------------------------------------------
5.0 Downloading the Firmware Package
Follow the instructions on Fix Central. You must read and agree to the
license agreement to obtain the firmware packages.
Note: If your HMC is not internet-connected, you will need to download the new
firmware level to a USB flash memory device or an FTP server.
----------------------------------------------------------------------------------
6.0 Installing the Firmware
The method used to install new firmware depends on the release level of the
firmware currently installed on your server. The release level can be
determined by the prefix of the new firmware's filename.
Example: VHxxx_yyy_zzz, where xxx = release level.
* If the release level will stay the same (Example: Level VH920_040_040 is
currently installed and you are attempting to install level VH920_041_040) this
is considered an update.
* If the release level will change (Example: Level VH900_040_040 is currently
installed and you are attempting to install level VH920_050_050) this is
considered an upgrade. Instructions for installing firmware updates and
upgrades can be found at
https://www.ibm.com/support/knowledgecenter/9080-M9S/p9eh6/p9eh6_updates_sys.htm
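The update-versus-upgrade rule above can be sketched as a small helper that compares the release-level prefix (the part before the first underscore) of the installed and new firmware levels; the function names are illustrative only.

```python
# Classify a firmware install as an update (same release level) or an
# upgrade (different release level), per the VHxxx_yyy_zzz naming rule.
def release_level(level: str) -> str:
    """Return the release-level prefix (e.g. 'VH920') of a firmware level."""
    return level.split("_", 1)[0]

def classify_install(installed: str, new: str) -> str:
    """Return 'update' when the release level stays the same, else 'upgrade'."""
    return "update" if release_level(installed) == release_level(new) else "upgrade"

print(classify_install("VH920_040_040", "VH920_041_040"))  # update
print(classify_install("VH900_040_040", "VH920_050_050"))  # upgrade
```

These mirror the two examples in the text: VH920_040_040 to VH920_041_040 is an update, while VH900_040_040 to VH920_050_050 is an upgrade.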
IBM i Systems:
For information concerning IBM i Systems, go to the following URL to access
Fix Central:
http://www-933.ibm.com/support/fixcentral/
Choose "Select product", under Product Group specify "System i", under
Product specify "IBM i", then Continue and specify the desired firmware PTF
accordingly.
HMC and NovaLink Co-Managed Systems:
A co-managed system is managed by HMC and NovaLink, with one of the interfaces
in the co-management master mode.
Instructions for installing firmware updates and upgrades on systems
co-managed by an HMC and NovaLink are the same as those above for HMC-managed
systems, since the firmware update must be done by the HMC in co-management
master mode. Before the firmware update is attempted, be sure that the HMC is
set to master mode using the steps at the following IBM Knowledge Center link
for NovaLink co-managed systems:
https://www.ibm.com/support/knowledgecenter/9009-22A/p9eig/p9eig_kickoff.htm
The firmware updates can then proceed with the same steps as for HMC-managed
systems, except that the system must be powered off because only a disruptive
update is allowed. If a concurrent update is attempted, the following error
will occur: "HSCF0180E Operation failed for ().
The operation failed. E302F861 is the error code:"
https://www.ibm.com/support/knowledgecenter/9009-22A/p9eh6/p9eh6_updates_sys.htm
(https://www.ibm.com/support/knowledgecenter/8247-21L/p8ha1/updupdates.htm)
7.0 Firmware History
The complete Firmware Fix History (including HIPER descriptions) for this
Release level can be reviewed at the following url:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html