VH930
For Impact, Severity, and other firmware definitions, please refer to the
'Glossary of firmware terms' at the following URL:
http://www14.software.ibm.com/webapp/set2/sas/f/power5cm/home.html#termdefs
The complete firmware fix history for this release level can be reviewed at
the following URL:
http://download.boulder.ibm.com/ibmdl/pub/software/server/firmware/VH-Firmware-Hist.html
VH930_139_035 / FW930.41
05/25/21
Impact: Availability
Severity: HIPER
System firmware changes that affect all systems
- HIPER/Pervasive: A problem was fixed to detect a failed PFET
sensing circuit in a core at runtime, and to prevent a system failure with an
incomplete state when a core fails to wake up. The failed core is
detected on the subsequent IPL. With the fix, the core with the PFET
failure is called out with SRC BC13090F and hardware description "CME
detected malfunctioning of PFET headers." to better isolate the error
with a correct callout.
- HIPER/Pervasive: A problem was fixed for unrecoverable (UE) SRCs B150BA40 and B181BE12
being logged for a Hostboot TI (due to no actual fault), causing nodes
to be deconfigured and the system to re-IPL with reduced resources. The
problem can be triggered during a firmware upgrade or disruptive
firmware update. The problem can also occur on the first IPL after a
concurrent firmware update. The problem can also occur outside of a
firmware update scenario for some reconfiguration loops that can happen
in Hostboot. There is also a visible callout indicating one or more
nodes/backplanes have a problem which can lead to unnecessary repairs.
- HIPER/Pervasive:
A problem was fixed for a checkstop due to an internal Bus transport
parity error or a data timeout on the Bus. This is a very rare
problem that requires a particular SMP transport link traffic pattern
and timing. Both the traffic pattern and timing are very
difficult to achieve with customer application workloads. The fix will
have no measurable effect on most customer workloads although highly
intensive OLAP-like workloads may see up to 2.5% impact.
VH930_134_035 / FW930.40
03/10/21
Impact: Availability
Severity: HIPER
New features and functions
- Added support in ASMI for a new panel to do Self-Boot Engine (SBE)
SEEPROM validation. This validation can only be run at the
service processor standby state. If the validation
detects a problem, IBM recommends the system not be used and that IBM
service be called.
System firmware changes that affect all systems
- HIPER/Pervasive:
DEFERRED: A problem was fixed for a system checkstop with
an SRC BC14E540 logged that can occur during certain SMP cable failure
scenarios. A re-IPL of the system is needed to activate this fix.
- HIPER/Pervasive: A problem was fixed for soft error recovery not working in
the DPSS (Digital Power Subsystem Sweep) programmable power controller
that results in the DPSS being called out as a failed FRU.
However, the DPSS is recovered on the next IPL of the system.
There is no impact to the running system as there is a failover to a
backup DPSS and the system continues running.
- A problem was fixed for the On-Chip Controller (OCC) going
into safe mode (causes loss of processor performance) with SRC BC702616
logged. This problem can be triggered by the loss of a power
supply (an oversubscription event). The problem can be
circumvented by fixing the issue with the power supply.
- A problem was fixed for certain SR-IOV adapters that have a
rare, intermittent error with B400FF02 and B400FF04 logged, causing a
reboot of the VF. The error is handled and recovered without any
user intervention needed. The SR-IOV adapters affected have the
following Feature Codes and CCINs: #EC2R/#EC2S with CCIN 58FA;
#EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN 2CEC; and #EC66/#EC67
with CCIN 2CF3.
- A problem was fixed for certain Power Interface Board (PIB)
errors with BC200D01 logged not causing a callout and deconfiguration
of the failing FRU. Instead, SRC BC8A1703 was logged, followed by SRC
B150BA3C, calling out the system backplane.
This problem can result in an entire node failing to IPL instead of
just having the failing FRU deconfigured.
- A problem was fixed for not logging SRCs for certain cable
pulls from the #EMX0 PCIe expansion drawer. With the fix, the
previously undetected cable pulls are now detected and logged with SRC
B7006A8B and B7006A88 errors.
- A problem was fixed for a rare system hang with SRC
BC70E540 logged that may occur when adding processors through licensing
or the system throttle state changing (becoming throttled or
unthrottled) on an Enterprise Pool system. The trigger for the
problem is a very small timing window in the hardware as the processor
loads are changing.
- A problem was fixed for the Systems Management Services (SMS) menu
"Device IO Information" option being incorrect when
displaying the capacity for an NVMe or Fibre Channel (FC) NVMe disk.
This problem occurs every time the data is displayed.
- A problem was fixed for an unrecoverable UE SRC B181BE12
being logged if a service processor message acknowledgment is sent to a
Hostboot instance that has already shut down. This is a harmless
error log and it should have been marked as an informational log.
- A problem was fixed for Time of Day (TOD) being lost for
the real-time clock (RTC) when the system initializes from AC power off
to service processor standby state with an SRC B15A3303 logged.
This is a very rare problem that involves a timing problem in the
service processor kernel that can be recovered by setting the system
time with ASMI.
- A problem was fixed for intermittent failures for a reset
of a Virtual Function (VF) for SR-IOV adapters during Enhanced Error
Handling (EEH) error recovery. This is triggered by EEH events at
a VF level only, not at the adapter level. The error recovery
fails if a data packet is received by the VF while the EEH recovery is
in progress. A VF that has failed can be recovered by a partition
reboot or a DLPAR remove and add of the VF.
- A problem was fixed for performance degradation of a
partition due to task dispatching delays. This may happen when a
processor chip has all of its shared processors removed and converted
to dedicated processors. This could be driven by DLPAR remove of
processors or Dynamic Platform Optimization (DPO).
- The following problems were fixed for certain SR-IOV
adapters:
1) An error was fixed that occurs during VNIC failover where the VNIC
backing device has a physical port down with an SRC B400FF02 logged.
2) A problem was fixed for adding a new logical port with a PVID
assigned, which caused traffic on that VLAN to be dropped by other
interfaces on the same physical port that use OS VLAN tagging for the
same VLAN ID. Each time a logical port with a non-zero PVID that
matches an existing VLAN is dynamically added to a partition, or is
activated as part of a partition activation, traffic flow stops for
other partitions with OS-configured VLAN devices using the same VLAN
ID. This problem can be recovered by configuring an IP address on the
logical port with the non-zero PVID and initiating traffic flow on that
logical port.
This problem can be avoided by not configuring logical ports with a
PVID if other logical ports on the same physical port are configured
with OS VLAN devices.
This fix updates the adapter firmware to 11.4.415.36 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for incomplete periodic data gathered
by IBM Service for #EMX0 PCIe expansion drawer predictive error
analysis. The service data is missing the PLX (PCIe switch) data that
is needed for the debug of certain errors.
- A problem was fixed for a partition hang in shutdown with
SRC B200F00F logged. The trigger for the problem is an
asynchronous NX accelerator job (such as gzip or NX842 compression) in
the partition that fails to clean up successfully. This is
intermittent and does not cause a problem until a shutdown of the
partition is attempted. The hung partition can be recovered by
performing an LPAR dump on the hung partition. When the dump has
been completed, the partition will be properly shut down and can then
be restarted without any errors.
System firmware changes that affect certain systems
- On systems with an IBM i partition, a problem was fixed for
only seeing 50% of the total Power Enterprise Pools (PEP) 1.0 memory
that is provided. This happens when querying resource information
via QAPMCONF which calls MATMATR 0x01F6. With the fix, an error
is corrected in the IBM i MATMATR option 0x01F6 that retrieves the
memory information for the Collection Services.
VH930_116_035 / FW930.30
10/21/20
Impact: Data
Severity: HIPER
New features and functions
- DEFERRED: Host firmware support for anti-rollback protection. This feature
implements firmware anti-rollback protection as described in NIST SP
800-147B "BIOS Protection Guidelines for Servers". Firmware is
signed with a "secure version". Support added for a new menu in
ASMI called "Host firmware security policy" to update this secure
version level at the processor hardware. Using this menu, the
system administrator can enable the "Host firmware secure version
lock-in" policy, which will cause the host firmware to update the
"minimum secure version" to match the currently running firmware. Use
the "Firmware Update Policy" menu in ASMI to show the current "minimum
secure version" in the processor hardware along with the "Minimum code
level supported" information. The secure boot verification process will
block installing any firmware secure version that is less than the
"minimum secure version" maintained in the processor hardware.
Prior to enabling the "lock-in" policy, it is recommended to accept the
current firmware level.
WARNING: Once lock-in is enabled and the system is booted, the "minimum
secure version" is updated and there is no way to roll it back to allow
installing firmware releases with a lesser secure version.
- Enable periodic logging of internal component operational
data for the PCIe3 expansion drawer paths. The logging of this
data does not impact the normal use of the system.
System firmware changes that affect all systems
- HIPER/Pervasive: A problem was fixed for certain SR-IOV adapters for a
condition that may result from frequent resets of adapter Virtual
Functions (VFs) or transmission stalls, and that could lead to potential
undetected data corruption.
The following additional fixes are also included:
1) The VNIC backing device goes to a powered off state during a VNIC
failover or Live Partition Mobility (LPM) migration. This failure
is intermittent and very infrequent.
2) Adapter time-outs with SRC B400FF01 or B400FF02 logged.
3) Adapter time-outs related to adapter commands becoming blocked with
SRC B400FF01 or B400FF02 logged.
4) VF function resets occasionally not completing quickly enough
resulting in SRC B400FF02 logged.
This fix updates the adapter firmware to 11.4.415.33 for the following
Feature Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- DEFERRED: A
problem was fixed for a slow down in PCIe adapter performance or loss
of adapter function caused by a reduction in interrupts available to
service the adapter. This problem can be triggered over time by
partition activations or DLPAR adds of PCIe adapters to a
partition. This fix must be applied and the system re-IPLed for
existing adapter performance problems to be resolved.
- DEFERRED: A
problem was fixed for system UPIC cable validation not being able to
detect cross-plugged UPIC cables. If the cables are plugged
incorrectly and there is a need for service, modifying the wrong FRU
locations can have adverse effects on the system, including system
outage. If the system firmware level is earlier than FW940.00,
the UPIC cables cannot be manually verified when the system power is
on. The cable status that is displayed is the result of the last cable
validation that occurred. Cable validation occurs automatically during
system power on. For this fix to take effect, the system
must be powered off and then re-IPLed.
- A rare problem was fixed for a checkstop during an IPL that
fails to isolate and guard the problem core. An SRC is logged
with B1xxE5xx and an extended hex word 8 xxxxDD90. With the fix,
the suspected failing hardware is guarded and a node is possibly
deconfigured to allow the subsequent IPLs of the system to be
successful.
- A problem was fixed to allow quicker recovery of PCIe links
for the #EMX0 PCIe expansion drawer for a run-time fault with B7006A22
logged. The time for recovery attempts can exceed six minutes on
rare occasions which may cause I/O adapter failures and failed
nodes. With the fix, the PCIe links will recover or fail faster
(in the order of seconds) so that redundancy in a cluster configuration
can be used with failure detection and failover processing by other
hosts, if available, in the case where the PCIe links fail to recover.
- A problem was fixed for system memory not returned after
create and delete of partitions, resulting in slightly less memory
available after configuration changes in the systems. With the fix, an
IPL of the system will recover any of the memory that was orphaned by
the issue.
- A problem was fixed for certain SR-IOV adapters that do not
support the "Disable Logical Port" option from the HMC but the HMC was
allowing the user to select this, causing incorrect operation.
The invalid state of the logical port causes an "Enable Logical Port"
to fail in a subsequent operation. With the fix, the HMC provides
the message that the "Disable Logical Port" is not supported for the
adapter. This affects the adapters with the following Feature
Codes and CCINs: #EN15/#EN16 with CCIN 2CE3, #EN17/#EN18 with
CCIN 2CE4, #EN0H/#EN0J with CCIN 2B93, #EN0M/#EN0N with CCIN 2CC0, and
#EN0K/#EN0L with CCIN 2CC1.
- A problem was fixed for SR-IOV adapters having an SRC
B400FF04 logged when a VF is reset. This is an infrequent issue
and can occur for a Live Partition Mobility migration of a partition or
during vNIC (Virtual Network Interface Controller) failovers where many
resets of VFs are occurring. This error is recovered
automatically with no impact on the system.
- A problem was fixed to remove unneeded resets of a Virtual
Function (VF) for SR-IOV adapters, providing for improved performance
of the startup or recovery time of the VF. This performance difference
may be noticed during a Live Partition Mobility migration of a
partition or during vNIC (Virtual Network Interface Controller)
failovers where many resets of VFs are occurring.
- A problem was fixed for TPM hardware failures not causing
SRCs to be logged with a callout if the system is configured in ASMI to
not require TPM for the IPL. If this error occurs, the user would not
find out about it until they needed to run with TPM on the IPL. With
the fix, the error logs and notifications will occur regardless of how
the TPM is configured.
- A problem was fixed for clock card errors not being called
out in the error log when the primary clock card fails. This problem
makes it more difficult for the system user to be aware that clock card
redundancy has been lost, and that service is needed to restore the
redundancy.
- A problem was fixed for PCIe resources under a deconfigured
PCIe Host Bridge (PHB) being shown on the OS host as available
resources when they should be shown as deconfigured. While this
fix can be applied concurrently, a re-IPL of the system is needed to
correct the state of the PCIe resources if a PHB had already been
deconfigured.
- A problem was fixed for the REST/Redfish interface to
change the success return code for object creation from "200" to
"201". A "200" status code indicates only that the request succeeded,
while a "201" status code indicates that the request was successful
and, as a result, a resource has been created. The Redfish Ruby
Client, "redfish_client", may fail a transaction if a "200" status code
is returned when "201" is expected. (An illustrative client-side sketch
follows this entry.)
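Below is a minimal, hedged client-side sketch (Python, using the
"requests" library) showing a caller that treats the corrected "201"
response as success for Redfish resource creation while still tolerating
the pre-fix "200" response. The host, endpoint path, payload, and token
handling are illustrative assumptions and are not taken from this
firmware documentation.

  # Hypothetical example: create a resource on a Redfish service and accept
  # either 201 (post-fix firmware) or 200 (pre-fix firmware) as success.
  import requests

  def create_redfish_resource(base_url, path, payload, token):
      resp = requests.post(
          base_url + path,
          json=payload,
          headers={"X-Auth-Token": token},
          verify=False,   # assumption: self-signed service processor certificate
          timeout=30,
      )
      if resp.status_code in (200, 201):   # 201 = resource created (post-fix behavior)
          return resp.headers.get("Location")
      resp.raise_for_status()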
- A problem was fixed for a concurrent maintenance "Repair
and Verify" (R&V) operation for a #EMX0 fanout module that fails
with an "Unable to isolate the resource" error message. This
should occur only infrequently for cases where a physical hardware
failure has occurred which prevents access to slot power
controls. This problem can be worked around by bringing up the
"PCIe Hardware Topology" screen from either ASMI or the HMC after the
hardware failure but before the concurrent repair is attempted.
This will avoid the problem with the PCIe slot isolation.
These steps can also be used to recover from the error to allow the
R&V repair to be attempted again.
- A problem was fixed for a rare system hang that can occur
when a page of memory is being migrated. Page migration (memory
relocation) can occur for a variety of reasons, including predictive
memory failure, DLPAR of memory, and normal operations related to
managing the page pool resources.
- A problem was fixed for utilization statistics for commands
such as HMC lslparutil and third-party lpar2rrd that do not accurately
represent CPU utilization. The values are incorrect every time
for a partition that is migrated with Live Partition Mobility
(LPM). Power Enterprise Pools 2.0 is not affected by this
problem. If this problem has occurred, here are three possible
recovery options:
1) Re-IPL the target system of the migration.
2) Or delete and recreate the partition on the target system.
3) Or perform an inactive migration of the partition. The cycle
values get zeroed in this case.
- A problem was fixed for running PCM on a system with SR-IOV
adapters in shared mode that results in an "Incomplete" system state
with certain hypervisor tasks deadlocked. This problem is rare and is
triggered when using SR-IOV adapters in shared mode and gathering
performance statistics with PCM (Performance Collection and Monitoring)
and also having a low level error on an adapter. The only way to
recover from this condition is to re-IPL the system.
- A problem was fixed for IBM Power Enterprise Pools 2.0 (PEP
2.0) where the HMC "Throttling" flag, once activated, stays on even
after the system is back in compliance. The problem is triggered
if PEP 2.0 has expired or is out of compliance. When the system
is back in compliance, the throttling stops but the HMC still displays
"Throttling" as "ON". This problem is extremely infrequent as
normal PEP 2.0 usage should not need throttling. The "Throttling"
flag will get turned off if the system gets an updated PEP 2.0
renewal key.
- A problem was fixed for an enhanced PCIe expansion drawer
FPGA reset causing EEH events from the fanout module or cable cards
that disrupt the PCIe lanes for the PCIe adapters. This problem
affects systems with the PCIe expansion drawer enhanced fanout module
(#EMXH) and the enhanced cable card (#EJ19).
The error is associated with the following SRCs being logged:
B7006A8D with PRC 37414123 (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::LocalFpgaHwReset)
B7006A8E with PRC 3741412A (XmPrc::XmCCErrMgrBearPawPrime |
XmPrc::RemoteFpgaHwReset)
If the EEH errors occur, the OS device drivers automatically recover
but with a reset of affected PCIe adapters that would cause a brief
interruption in the I/O communications.
- A problem was fixed for the FRU callout lists for SRCs
B7006A2A and B7006A2B possibly not including the FRU containing the
PCIe switch as the second FRU in the callout list. The
card/drive in the slot is the first callout and the FRU containing the
PCIe switch should be the second FRU in the callout list. This
problem occurs when the PCIe slot is on a different planar than the
PCIe switch backing the slot. This impacts the NVMe backplanes
(P2 with slots C1-C4) hosting the PCIe backed SSD NVMe U.2 modules that
have feature codes #EC5J and #EC5K. As a workaround for B7006A2A
and B7006A2B errors where the callout FRU list is processed and the
problem is not resolved, consider replacing the backplane (which
includes the PCIe switch) if this was omitted in the FRU callout list.
- A problem was fixed for a PCIe3 expansion drawer cable that
has hidden error logs for a single lane failure. This happens whenever
a single lane error occurs. Subsequent lane failures are not
hidden and have visible error logs. Without the fix, the hidden or
informational logs would need to be examined to gather more information
for the failing hardware.
- A problem was fixed for mixing modes on the ports of SR-IOV
adapters that causes SRCs B200A161, B200F011, B2009014 and B400F104 to
be logged on boot of the failed adapter. This error happens when
one port of the adapter is changed to option 1 with a second port set
at either option 0 or option 2. The error can be cleared by
taking the adapter out of SR-IOV shared mode. The SR-IOV adapters
affected have the following Feature Codes and CCINs: #EC2R/#EC2S with
CCIN 58FA; #EC2T/#EC2U with CCIN 58FB; #EC3L/#EC3M with CCIN
2CEC; and #EC66/#EC67 with CCIN 2CF3.
- A problem was fixed for a partition configured with a large
number (approximately 64) of Virtual Persistent Memory (PMEM) LUNs
hanging during the partition activation with a CA00E134 checkpoint SRC
posted. Partitions configured with approximately 64 PMEM LUNs
will likely hang and the greater the number of LUNs, the greater the
possibility of the hang. The circumvention for this problem is to
reduce the number of PMEM LUNs to 64 or less in order to boot
successfully. The PMEM LUNs are also known as persistent memory
volumes and can be managed using the HMC. For more information on
this topic, refer to https://www.ibm.com/support/knowledgecenter/POWER9/p9efd/p9efd_lpar_pmem_settings.htm.
- A problem was fixed for non-optimal On-Chip Controller
(OCC) processor frequency adjustments when system power limits or user
power caps are exceeded. When a workload causes power limits or
caps to be exceeded, there can be large frequency swings for the
processors and a processor chip can get stuck at minimum
frequency. With the fix, the OCC now waits for new power readings
when changing the processor frequency and uses a master power capping
frequency to keep all processors at the same frequency. As a
workaround for this problem, do not set a power cap or run a workload
that would exceed the system power limit.
- A problem was fixed for mixing memory DIMMs with different
timings (different vendors) under the same memory controller that fail
with an SRC BC20E504 error and DIMMs deconfigured. This is an
"MCBIST_BRODCAST_OUT_OF_SYNC" error. The loss of memory DIMMs can
result in an IPL failure. This problem can happen if the memory
DIMMs have a certain level of timing differences. If the timings
are not compatible, the failure will occur on the IPL during the memory
training. To circumvent this problem, each memory controller should
have only memory DIMMs from the same vendor plugged.
- A problem was fixed for the Self Boot Engine (SBE) going to
termination with an SRC B150BA8D logged when booting on a bad
core. Once this happens, this error will persist as the bad core is not
deconfigured. To recover from this error and be able to IPL, a failover
can be done to the backup service processor and IPL from there. With
the fix, the failing core is deconfigured and the SBE is reconfigured
to use another core so the system can IPL.
- A problem was fixed for guard clearing where a specific
unguard action may cause other unrelated predictive and manual guards
to also be cleared.
- A problem was fixed for an infrequent issue after a Live
Partition Mobility (LPM) operation from a POWER9 system to a POWER8 or
POWER7 system. The issue may cause unexpected OS behavior, which may
include loss of interrupts, device time-outs, or delays in
dispatching. Rebooting the affected target partition will resolve
the problem.
- A problem was fixed for a partition crash or hang following
a partition activation or a DLPAR add of a virtual processor. For
partition activation, this issue is only possible for a system with a
single partition owning all resources. For DLPAR add, the issue
is extremely rare.
- A problem was fixed for a DLPAR remove of memory from a
partition that fails if the partition contains 65535 or more
LMBs. With 16 MB LMBs, this error threshold is 1 TB of memory;
with 256 MB LMBs, it is 16 TB of memory (see the arithmetic sketch
below). A reboot of the partition after the DLPAR will remove the
memory from the partition.
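The thresholds quoted above follow directly from the 65535 LMB limit;
the short Python sketch below reproduces the arithmetic (the helper name
is hypothetical and is only for illustration).

  LMB_LIMIT = 65535  # LMB count at which the DLPAR memory remove issue is hit

  def threshold_in_tb(lmb_size_mb):
      # Approximate memory size, in TB, at which a partition reaches the LMB limit.
      return LMB_LIMIT * lmb_size_mb / (1024 * 1024)

  print(threshold_in_tb(16))   # ~1 TB with 16 MB LMBs
  print(threshold_in_tb(256))  # ~16 TB with 256 MB LMBs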
- A problem was fixed for incorrect run-time deconfiguration
of a processor core with SRC B700F10B. This problem can be circumvented
by a reconfiguration of the processor core but this should only be done
with the guidance of IBM Support to ensure the core is good.
- A problem was fixed for Live Partition Mobility (LPM) being
shown as enabled at the OS when it has been disabled by the ASMI
command line using the service processor command of "cfcuod -LPM
OFF". LPM is actually disabled and the status shows correctly on
the HMC. The status on the OS can be ignored (for example as
shown by the AIX command "lparstat -L") as LPM will not be
allowed to run when it is disabled.
- A problem was fixed for an IPL failure with SRC BC10E504
logged. This may occur if there is a deconfiguration of the first
node in the system (node 0) or a deconfiguration of processor chip 0 in
the first node. The workaround to this problem is to change the
hardware configuration to ensure that the first processor chip in the
first node is configured.
- A problem was fixed for a VIOS, AIX, or Linux partition
hang during an activation at SRC CA000040. This will occur on a
system that has been running more than 814 days when the boot of the
partition is attempted if the partitions are in POWER9_base or POWER9
processor compatibility mode.
A workaround to this problem is to re-IPL the system or to change the
failing partition to POWER8 compatibility mode.
- A problem was fixed for performance tools perfpmr, tprof
and pex that may not be able to collect data for the event-based
options.
This can occur any time an OS thread becomes idle. When the
processor cores are assigned to the next active process, the
performance registers may be disabled.
- A problem was fixed for a system hang and HMC "Incomplete"
state that may occur when a partition hangs in shutdown with SRC
B200F00F logged. The trigger for the problem is an asynchronous NX
accelerator job (such as gzip) in the partition that fails to clean up
successfully. This is intermittent and does not cause a problem until a
shutdown of the partition is attempted.
- A problem was fixed so that an SRC B7006A99, previously an
informational log, is now posted as a Predictive error with a callout
of the CXP cable FRU. This fix improves FRU isolation for cases where
a CXP cable alert causes a B7006A99 that occurs prior to a B7006A22 or
B7006A8B. Without the fix, the SRC B7006A99 is informational only, and
the latter SRCs cause a larger hardware replacement even though the
earlier event identified a probable cause in the cable FRU.
- A problem was
fixed for a security vulnerability for the Self Boot Engine
(SBE). The SBE can be compromised from the service processor to
allow injection of malicious code. An attacker that gains root access
to the service processor could compromise the integrity of the host
firmware and bypass the host firmware signature verification process.
This compromised state cannot be detected through TPM
attestation. This is Common Vulnerabilities and Exposures issue
number CVE-2021-20487.
System firmware changes that affect certain systems
- On systems with an IBM i partition, a problem was fixed for
a dedicated memory IBM i partition running in P9 processor
compatibility mode failing to activate with HSCL1552 "the firmware
operation failed with extended error". This failure only occurs
under a very specific scenario - the new amount of desired memory is
less than the current desired memory, and the Hardware Page Table (HPT)
size needs to grow.
- On systems with AIX and Linux partitions, a problem was
fixed for AIX and Linux partitions that crash or hang when reporting
any of the following Partition Firmware RTAS ASSERT rare conditions:
1) SRC BA33xxxx errors - Memory allocation and management errors.
2) SRC BA29xxxx errors - Partition Firmware internal stack errors.
3) SRC BA00E8xx errors - Partition Firmware initialization errors
during concurrent firmware update or Live Partition Mobility (LPM)
operations.
This problem should be very rare. If the problem does occur, a
partition reboot is needed to recover from the error.
VH930_101_035 / FW930.20
02/27/20
Impact: Availability
Severity: HIPER
New features and functions
- Support was added
for real-time data capture for PCIe3 expansion drawer (#EMX0) cable
card connection data via resource dump selector on the HMC or in ASMI
on the service processor. Using the resource selector string of
"xmfr -dumpccdata" will non-disruptively generate an RSCDUMP type of
dump file that has the current cable card data, including data from
cables and the retimers.
- Support was added for redundant VPD EEPROMs. If the
primary module VPD EEPROM fails, the system will automatically change
to the backup module.
System firmware changes that affect all systems
- HIPER/Pervasive:
A problem was fixed for a possible system crash and HMC "Incomplete"
state when a logical partition (LPAR) is powered off after a dynamic LPAR
(DLPAR) operation fails for a PCIe adapter. This scenario is
likely to occur during concurrent maintenance of PCIe adapters or for
#EMX0 components such as PCIe3 Cable adapters, Active Optical or copper
cables, fanout modules, chassis management cards, or midplanes.
The DLPAR fail can leave page table mappings active for the adapter,
causing the problems on the power down of the LPAR. If the system
does not crash, the DLPAR will fail if it is retried until a platform
IPL is performed.
- HIPER/Pervasive:
A problem was fixed for an HMC "Incomplete" state for a system after
the HMC user password is changed with ASMI on the service
processor. This problem can occur if the HMC password is changed
on the service processor but not also on the HMC, and a reset of the
service processor happens. With the fix, the HMC will get the
needed "failed authentication" error so that the user knows to update
the old password on the HMC.
- DEFERRED: A problem was fixed for a processor core failure with SRCs
B150BA3C and BC8A090F logged that deconfigures the entire processor for
the current IPL. A re-IPL of the system will recover the lost processor
with only the bad core guarded.
- A problem was fixed for certain SR-IOV adapters that can
have an adapter reset after a mailbox command timeout error.
This fix updates the adapter firmware to 11.2.211.39 for the
following Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3,
#EN17/EN18 with CCIN 2CE4, #EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with
CCIN 2CC0, and #EN0K/EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an
adapter is first set to SR-IOV shared mode, the adapter firmware is
updated to the latest level available with the system firmware (and it
is also updated automatically during maintenance operations, such as
when the adapter is stopped or replaced). And lastly, selective
manual updates of the SR-IOV adapters can be performed using the
Hardware Management Console (HMC). To selectively update the
adapter firmware, follow the steps given at the IBM Knowledge Center
for using HMC to make the updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for an SR-IOV adapter failure with
B400FFxx errors logged when moving the adapter to shared mode.
This is an infrequent race condition where the adapter is not yet ready
for commands and it can also occur during EEH error recovery for the
adapter. This affects the SR-IOV adapters with the following
feature codes and CCINs: #EC2R/EC2S with CCIN 58FA;
#EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67
with CCIN 2CF3.
- A problem was fixed for an IPL failure with the following
possible SRCs logged: 11007611, 110076x1, 1100D00C, and
110015xx. The service processor may reset/reload for this
intermittent error and end up in the termination state.
- A problem was fixed for a failed clock card causing a node
to be guarded during the IPL of a multi-node system. With the
fix, the redundant clock card allows all the nodes to IPL in the case
of a single clock card failure.
- A problem was fixed for the green power LED on the System
Control Unit (SCU) not being lit even though the system is
powered on. Without the fix, the LED is always in the off state.
- A problem was fixed for delayed interrupts on a Power9
system following a Live Partition Mobility operation from a Power7 or
Power8 system. The delayed interrupts could cause device
time-outs, program dispatching delays, or other device problems on the
target Power9 system.
- A problem was fixed for processor cores not being able to
be used by dedicated processor partitions if they were DLPAR removed
from a dedicated processor partition. This error can occur if
there was a firmware assisted dump or a Live Partition Mobility (LPM)
operation after the DLPAR of the processor. A re-IPL of the
system will recover the processor cores.
- A problem was fixed for a B7006A96 fanout module FPGA
corruption error that can occur in unsupported PCIe3 expansion
drawer (#EMX0) configurations that mix an enhanced PCIe3 fanout module
(#EMXH) in the same drawer with legacy PCIe3 fanout modules (#EMXF,
#EMXG, #ELMF, or #ELMG). This causes the FPGA on the enhanced
#EMXH to be updated with the legacy firmware and it becomes a
non-working and unusable fanout module. With the fix, the
unsupported #EMX0 configurations are detected and handled gracefully
without harm to the FPGA on the enhanced fanout modules.
- A problem was fixed for lost interrupts that could cause
device time-outs or delays in dispatching a program process. This
can occur during memory operations that require a memory relocation for
any partition such as mirrored memory defragmentation done by the HMC
optmem command, or memory guarding that happens as part of memory error
recovery during normal operations of the system.
- A problem was fixed for extraneous informational logging of
SRC B7006A10 ("Insufficient SR-IOV resources available") with a 1306
PRC. This SRC is logged whenever an SR-IOV adapter is moved from
dedicated mode to shared mode. This SRC with the 1306 PRC should
be ignored as no action is needed and there is no issue with SR-IOV
resources.
- A problem was fixed for a hypervisor error during system
shutdown where a B7000602 SRC is logged and the system may also briefly
go "Incomplete" on the HMC but the shutdown is successful. The
system will power back on with no problems so the SRC can be ignored if
it occurred during a shutdown.
- A problem was fixed for possible dispatching delays for
partitions running in POWER8, POWER9_base or POWER9 processor
compatibility mode.
- A problem was fixed for extraneous B400FF01 and B400FF02
SRCs logged when moving cables on SR-IOV adapters. This is an
infrequent error that can occur if the HMC performance monitor is
running at the same time the cables are moved. These SRCs can be
ignored when accompanied by cable movement.
System firmware changes that affect certain systems
- On systems with an
IBM i partition, a problem was fixed that occurs after a Live Partition
Mobility (LPM) of an IBM i partition that may cause issues including
dispatching delays and the inability to do further LPM operations of
that partition. The frequency of this problem is rare. A
partition encountering this error can be recovered with a reboot of the
partition.
- On systems with an IBM i partition, a problem was fixed for
a D-mode IPL failure when using a USB DVD drive in an IBM 7226
multimedia storage enclosure. Error logs with SRC BA16010E,
B2003110, and/or B200308C can occur. As a circumvention, an
external DVD drive can be used for the D-mode IPL.
- On systems with an IBM i partition, a problem was fixed for
a possibly incorrect number of Memory COD (Capacity On Demand)
resources shown when gathering performance data with IBM i Collection
Services. Memory resources activated by Power Enterprise Pools (PEP)
1.0 will be missing from the data. An error was corrected in the
IBM i MATMATR option 0x01F6 that retrieves the Memory COD information
for the Collection Services.
- On systems with Integrated Facility for Linux (IFL)
processors and Linux-only partitions, a problem was fixed for Power
Enterprise Pools (PEP) 1.0 not going back into "Compliance" when
resources are moved from Server 1 to Server 2, causing an expected
"Approaching Out Of Compliance", but not automatically going back into
compliance when the resources are no longer used on Server 1. As
a circumvention, the user can do an extra "push" and "pull" of one
resource to make the Pool discover it is back in "Compliance".
VH930_093_035 / FW930.11
12/11/19
Impact: Availability
Severity: SPE
System firmware changes that affect all systems
- DEFERRED: PARTITION_DEFERRED: A problem was fixed for vHMC having no
usable local graphics console when installed on FW930.00 and later
partitions.
- DEFERRED: A problem was fixed for not being able to do an HMC
exchange FRU for the PCIe cassette in P1-C3 if a PCIe to USB conversion
card (CCIN 6B6C) is not installed in P1-C13. In this situation, the
P1-C3 location is not provided in the FRU selection list. An
alternative procedure to accomplish the same task would be to do an
exchange FRU on the PCIe adapter P1-C3-C1 in the PCIe cassette.
- DEFERRED: A problem was fixed for rare system checkstops triggered by
SMP cable failure or when one of the cables is not properly secured in
place.
- A problem was fixed for the Advanced System Management Interface
(ASMI) showing an "Unknown" in the Deconfiguration records if an SMP
group (SMPGROUP) unit is guarded. With the fix, "OBUS End Point" will
be displayed instead of "Unknown".
- A problem was fixed for the Advanced System Management Interface
(ASMI) menu for "PCIe Hardware Topology/Reset link" showing the wrong
value. This value is always wrong without the fix.
- A problem was fixed for a local clock card (LCC) failure that results
in a failed service processor failover and a system that does not IPL
or takes several hours to IPL. With the fix, missing local clock card
data is made available to the backup service processor so that the
failover can succeed, allowing the system to IPL.
- A problem was fixed for a PLL unlock error with SRC B124E504 causing
a secondary error of PRD Internal Firmware Software Fault with SRC
B181E580 and incorrect FRU callouts.
- A problem was fixed for an initialization failure of certain SR-IOV
adapter ports during adapter boot, causing a B400FF02 SRC to be
logged. This is a rare problem and it recovers automatically by the
reboot of the adapter on the error. This problem affects the SR-IOV
adapters with the following feature codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3.
- A problem was fixed for SR-IOV Virtual Functions (VFs) when the
multi-cast promiscuous flag has been turned on for the VF. Without the
fix, the VF device driver may sometimes erroneously fault when it
senses that multi-cast promiscuous mode has not been achieved although
it had been requested.
- A problem was fixed for SR-IOV adapters to provide a consistent
Informational message level for cable plugging issues. For
transceivers not plugged on certain SR-IOV adapters, an unrecoverable
error (UE) SRC B400FF03 was changed to an Informational message
logged. This affects the SR-IOV adapters with the following feature
codes and CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB;
#EC3L/EC3M with CCIN 2CEC; and #EC66/EC67 with CCIN 2CF3.
For copper cables unplugged on certain SR-IOV adapters, a missing
message was replaced with an Informational message logged. This
affects the SR-IOV adapters with the following feature codes and CCINs:
#EN17/EN18 with CCIN 2CE4, and #EN0K/EN0L with CCIN 2CC1.
- A problem was fixed for incorrect DIMM callouts for DIMM
over-temperature errors. The error log for the DIMM over-temperature
will have incorrect FRU callouts, either calling out the wrong DIMM or
the wrong DIMM controller memory buffer.
- A problem was fixed for an Operations Panel hang after using it to
set LAN Console as the console type for several iterations. After
several iterations, the operations panel may hang with "Function 41"
displayed. A hot unplug and plug of the operations panel can be used
to recover it from the hang.
- A problem was fixed for shared processor pools where uncapped shared
processor partitions placed in a pool may not be able to consume all
available processor cycles. The problem may occur when the sum of the
allocated processing units for the pool member partitions equals the
maximum processing units of the pool.
- A problem was fixed for NovaLink failing to activate partitions that
have names with character lengths near the maximum allowed length.
This problem can be circumvented by changing the partition name to have
32 characters or less.
- A problem was fixed where a Linux or AIX partition type was
incorrectly reported as unknown. Symptoms include: IBM Cloud
Management Console (CMC) not being able to determine the RPA partition
type (Linux/AIX) for partitions that are not active; and HMC attempts
to dynamically add CPU to Linux partitions may fail with a HSCL1528
error message stating that there are not enough Integrated Facility for
Linux (IFL) cores for the operation.
- A problem was fixed for a possible activation code memory conversion
sequence number error when creating a Power Enterprise Pool (PEP) 1.0
pool for a set of servers. This can happen if Perm Memory activations
were purchased local to a server but then needed to be converted from
Perm MEM to Mobile PEP Mem state for pool use. The deployment of the
PEP fails with the following messages on the HMC:
1) HSCL9017 HSCL0521 A Mobile CoD memory conversion code to convert
100 GB of permanently activated memory to Mobile CoD memory on the
managed system has been entered. The sequence number of the CoD code
indicates that this code has been used before. Obtain a new CoD code
and try again.
2) HSCL9119 The Mobile CoD memory activation code for the Power
enterprise pool was not entered because a permanent to Mobile CoD
memory conversion code for a server could not be entered.
To recover from this error, request a new XML file from IBM with an
updated Memory Conversion activation code.
- A problem was fixed for a hypervisor hang that can occur on the
target side when doing a Live Partition Mobility (LPM) migration from a
system that does not support encryption and compression of LPM data.
If the hang occurs, the HMC will go to an "Incomplete" state for the
target system. The problem is rare because the data from the source
partition must be in a very specific pattern to cause the failure.
When the failure occurs, a B182951C will be logged on the target
(destination) system and the HMC for the source partition will issue
the following message: "HSCLA318 The migration command issued to the
destination management console failed with the following error:
HSCLA228 The requested operation cannot be performed because the
managed system <system identifier> is not in the Standby or Operating
state.". To recover, the target system must be re-IPLed.
- A problem was fixed for performance collection tools not collecting
data for event-based options. This fix pertains to perfpmr and tprof
on AIX, and Performance Explorer (PEX) on IBM i.
- A problem was fixed for an SRC reminder that keeps repeating for
B150F138 even after a UPIC cable has been repaired or replaced.
Without the fix, a hot-plug of a UPIC cable while the system is running
will not be verified as fixing the cable until the system is re-IPLed,
so the initial error SRC for the missing or bad UPIC cable will be
posted repeatedly until the re-IPL occurs.
- A problem was fixed for a Live Partition Mobility (LPM) migration of
a large memory partition to a target system that causes the target
system to crash and the HMC to go to the "Incomplete" state. For
servers with the default LMB size (256 MB), if the partition is >=16 TB
and the desired memory is different than the maximum memory, LPM may
fail on the target system. Servers with LMB sizes less than the
default could hit this problem with smaller memory partition sizes. A
circumvention to the problem is to set the desired and maximum memory
to the same value for the large memory partition that is to be
migrated.
- A problem was fixed for certain SR-IOV adapters with the following
issues:
1) If the SR-IOV logical port's VLAN ID (PVID) is modified while the
logical port is configured, the adapter will use an incorrect PVID for
the Virtual Function (VF). This problem is rare because most users do
not change the PVID once the logical port is configured, so they will
not have the problem.
2) Adapters with an SRC of B400FF02 logged.
This fix updates the adapter firmware to 11.2.211.38 for the following
Feature Codes and CCINs: #EN15/EN16 with CCIN 2CE3, #EN17/EN18 with
CCIN 2CE4, #EN0H/EN0J with CCIN 2B93, #EN0M/EN0N with CCIN 2CC0,
and #EN0K/EN0L with CCIN 2CC1.
The SR-IOV adapter firmware level update for the shared-mode adapters
happens under user control to prevent unexpected temporary outages on
the adapters. A system reboot will update all SR-IOV shared-mode
adapters with the new firmware level. In addition, when an adapter is
first set to SR-IOV shared mode, the adapter firmware is updated to the
latest level available with the system firmware (and it is also updated
automatically during maintenance operations, such as when the adapter
is stopped or replaced). And lastly, selective manual updates of the
SR-IOV adapters can be performed using the Hardware Management Console
(HMC). To selectively update the adapter firmware, follow the steps
given at the IBM Knowledge Center for using HMC to make the
updates: https://www.ibm.com/support/knowledgecenter/en/POWER9/p9efd/p9efd_updating_sriov_firmware.htm.
Note: Adapters that are capable of running in SR-IOV mode, but are
currently running in dedicated mode and assigned to a partition, can be
updated concurrently either by the OS that owns the adapter or the
managing HMC (if OS is AIX or VIOS and RMC is running).
- A problem was fixed for certain SR-IOV adapters where after
some error conditions the adapter may hang with no messages or error
recovery. This is a rare problem for certain severe adapter
errors. This problem affects the SR-IOV adapters with the
following feature codes: #EC66/EC67 with CCIN 2CF3.
This problem can be recovered by removing the adapter from SR-IOV mode
and putting it back in SR-IOV mode, or the system can be re-IPLed.
- A problem was fixed for an initialization failure of certain SR-IOV
adapters when changed into SR-IOV mode. This is an infrequent problem
that most likely can occur following a concurrent firmware update when
the adapter also needs to be updated. This problem affects the SR-IOV
adapters with the following feature codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3. This problem can be recovered by removing
the adapter from SR-IOV mode and putting it back in SR-IOV mode, or the
system can be re-IPLed.
- A problem was fixed for a missing IBM Power Enterprise Pools 2.0
(PEP 2.0) message when the PEP 2.0 subscription expires and the
performance of the system is throttled. Also, at the time of the
throttle, the "remaining days" of the subscription still shows as
"remaining 1 day". This problem can only occur at least 90 days after
PEP 2.0 has been started and if the PEP 2.0 subscription is allowed to
expire.
- A problem was fixed for an IBM Power Enterprise Pools 2.0 (PEP 2.0)
extra performance "throttle ended" message when performance throttling
is removed. The extra "throttle ended" message, which shares nearly
the same timestamp as the first, can be ignored.
- A problem was fixed for a bad SMP cable causing a B114DA62 SRC to be
logged without a correct FRU callout. Without the fix, IBM support may
be needed to isolate the bad part that needs to be replaced.
- A problem was fixed for a rare SMP link initialization failure
reported with either SRC B114DA62 during IPL or BC14E540 immediately
after IPL. Without the fix, a system re-IPL is required to recover the
operation of the SMP cable.
- A problem was fixed for persistent high fan speeds in the system
after a service processor failover. To restore the fans to normal
speed without re-IPLing the system requires the following steps:
1) Use ASMI to perform a soft reset of the backup service processor.
2) When the backup service processor has completed its reset, use the
HMC to do an administrative failover, so that the reset service
processor becomes the primary.
3) Use ASMI to perform a soft reset on the new backup service
processor. When this has completed, system fan speeds should be back
to normal.
- A problem was fixed for a rare IPL failure with SRCs BC8A090F and
BC702214 logged caused by an overflow of VPD repair data for the
processor cores. A re-IPL of the system should recover from this
problem.
- A problem was fixed for a false memory error that can be logged
during the IPL with SRC BC70E540 with the description "mcb(n0p0c1)
(MCBISTFIR[12]) WAT_DEBUG_ATTN" but with no hardware callouts. This
error log can be ignored.
- A problem was fixed for an IPL failure after installing DIMMs of
different sizes, causing memory access errors. Without the fix, the
memory configuration should be restored to only use DIMMs of the same
size.
- A problem was fixed for a memory DIMM plugging rule violation that
causes the IPL to terminate with an error log with
RC_GET_MEM_VPD_UNSUPPORTED_CONFIG that calls out the memory port but
has no DIMM callouts, and no DIMM deconfigurations are done. With the
fix, the DIMMs that violate the plugging rules will be deconfigured and
the IPL will complete. Without the fix, the memory configuration
should be restored to the prior working configuration to allow the IPL
to be successful.
- A problem was fixed for a B7006A22 Recoverable Error for
the enhanced PCIe3 expansion drawer (#EMX0) I/O drawer with fanout PCIe
Six Slot Fan Out modules (#EMXH) installed. This can occur up to two
hours after an IPL from power off. This can be a frequent
occurrence on an IPL for systems that have the PCIe Six Slot Fan Out
module (#EMXH). The error is automatically recovered at the
hypervisor level. If an LPAR fails to start after this error, a
restart of the LPAR is needed.
- A problem was fixed for degraded memory
bandwidth on systems with memory that had been dynamically repaired
with symbols to mark the bad bits.
- A problem was fixed for processor or memory VRM power faults that
cause a guard of a node without a deconfiguration of the node. This
causes the degraded IPL to go to termination when the faulty node is
accessed during the IPL. The bad node can be manually deconfigured to
allow the IPL to succeed.
- A problem was fixed for an intermittent IPL failure with SRC B181E540
logged with fault signature "ex(n2p1c0) (L2FIR[13]) NCU Powerbus data
timeout". No FRU is called out. The error may be ignored since the
automatic re-IPL is successful. The error occurs very infrequently.
This is the second iteration of the fix that has been released.
Expedient routing of the Powerbus interrupts did not occur in all cases
in the prior fix, so the timeout problem was still occurring.
- A problem was fixed for a loss of service processor
redundancy after a failover to the backup on a Hostboot IPL
error. Although the failover is successful to the backup service
processor, the primary service processor may terminate. The
service processor can be recovered from termination by using a soft
reset from ASMI. This problem only can occur during the IPL, not
at run time. This problem is very rare and has only happened with
IBM internal testing using firmware error injection to cause a TI
during the Hostboot IPL.
System firmware changes that affect certain systems
- On systems with PCIe3 expansion drawers (feature code #EMX0), a
problem was fixed for a concurrent exchange of a PCIe expansion drawer
cable card that, although successful, leaves the fault LED turned on.
- On systems with 16 TB or more of memory, a problem was fixed for
certain SR-IOV adapters not being able to start a Virtual Function (VF)
if "I/O Adapter Enlarged Capacity" is enabled and VF option 0 has been
selected for the number of supported VFs. This problem affects the
SR-IOV adapters with the following feature codes and CCINs: #EC2R/EC2S
with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC;
and #EC66/EC67 with CCIN 2CF3. This problem can be circumvented by the
following action: change away from VF option 0. VF option 1 is the
default option and it will work.
- On systems with 64 TB of memory, a problem was fixed for certain
SR-IOV adapters not being able to start a Virtual Function (VF) if "I/O
Adapter Enlarged Capacity" is enabled. This problem affects the SR-IOV
adapters with the following feature codes and CCINs: #EC2R/EC2S with
CCIN 58FA; #EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and
#EC66/EC67 with CCIN 2CF3. This problem can be circumvented by
changing the configuration such that the adapter does not have "I/O
Adapter Enlarged Capacity" enabled.
- On systems with 16 GB huge-pages, a problem was fixed for certain
SR-IOV adapters with all or nearly all memory assigned to them
preventing a system IPL. This affects the SR-IOV adapters with the
following feature codes and CCINs: #EC2R/EC2S with CCIN 58FA;
#EC2T/EC2U with CCIN 58FB; #EC3L/EC3M with CCIN 2CEC; and #EC66/EC67
with CCIN 2CF3. The problem can be circumvented by powering off the
system and turning off all the huge-page allocations.
- On systems running IBM i partitions, a problem was fixed for an NVMe (Non-Volatile Memory Express) device load source not being found for an IBM i boot partition. This can occur on a system with multiple load source candidates when the needed load source is not the first entry in the namespace ID list. This problem can be circumvented by directing the System Licensed Internal Code (SLIC) Bootloader to a specific namespace, bypassing the search.
- On systems with IBM i partitions, a problem was fixed for Live Partition Mobility (LPM) migrations that could have incorrect hardware resource information (related to VPD) in the target partition if a failover had occurred for the source partition during the migration. This failover would have to occur during the Suspended state of the migration, which only lasts about a second, so this should be rare. With the fix, at a minimum, the migration error will be detected and the migration aborted so it can be restarted. At a later IBM i OS level, the fix will allow the migration to complete even though the failover has occurred during the Suspended state of the migration.
- On systems running IBM i partitions, a problem was fixed for IBM i collection services that may produce incorrect instruction count results.
- On systems running IBM i partitions, a performance improvement was made for the cache memory management of workloads that utilize heavy I/O operations.
- On systems with IBM i partitions, a rare problem was fixed for an intermittent failure of a DLPAR remove of an adapter. In most cases, a retry of the operation will be successful.
- On systems with IBM i partitions, a problem was fixed that was allowing V7R1 to boot on or be migrated to POWER9 servers. As documented in the System Software maps for IBM i (https://www-01.ibm.com/support/docview.wss?uid=ssm1platformibmi), V7R1 IBM i software is not supported on POWER9 servers.
- On systems with IBM i partitions, a problem was fixed for an LPAR restart error after a DLPAR of an active adapter was performed and
the LPAR was shut down. A reboot of the system will recover the
LPAR so it will start.
|
VH930_068_035 / FW930.03
08/22/19 |
Impact: Data
Severity: HIPER
System firmware changes that affect all systems
- HIPER/Pervasive:
A change was made to fix an intermittent
processor anomaly that may result in issues such as operating system or
hypervisor termination, application segmentation fault, hang, or
undetected data corruption. The only issues observed to date have
been operating system or hypervisor terminations.
- A problem was fixed for a very intermittent partition error
when using Live Partition Mobility (LPM) or concurrent firmware
update. For a mobility operation, the issue can result in a
partition crash if the mobility target system is FW930.00, FW930.01 or
FW930.02. For a code update operation, the partition may
hang. The recovery is to reboot the partition after the crash or
hang.
|
VH930_048_035 / FW930.02
06/28/19 |
Impact: Availability
Severity: SPE
New Features and Functions
- Support added for IBM Power Enterprise Pools
2.0. This is a new IBM Power E980 offering designed to deliver
enhanced multisystem resource sharing and by-the-minute consumption of
on-premises Power E980 compute resources to clients deploying and
managing a private cloud infrastructure. Each Power Enterprise
Pool (2.0) is monitored and managed from a Cloud Management Console in
the IBM Cloud. Capacity Credits may be purchased from IBM, an
authorized IBM Business Partner, or online through the IBM Entitled
System Support website, https://www.ibm.com/servers/eserver/ess/index.wss, where available. Clients may more easily identify capacity usage and trends across their Power E980 systems in a pool by viewing web-accessible aggregated data without spreadsheets or custom analysis tools.
- Support was added for a new P9 six-core processor with CCIN 5C33.
System firmware changes that affect all systems
- A problem was fixed for a bad link for the PCIe3 expansion
drawer (#EMX0) I/O drawer with the clock enhancement causing a system
failure with B700F103. This error could occur during an IPL or a
concurrent add of the link hardware.
- A problem was fixed for On-Chip Controller (OCC) power
capping operation time-outs with SRC B1112AD3 that caused the system to
enter safe mode, resulting in reduced performance. The problem
only occurred when the system was running with high power consumption,
requiring the need for OCC power capping.
- A problem was fixed for the "PCIe Topology " option to get
cable information in the HMC or ASMI that was returning the wrong cable
part numbers if the PCIe3 expansion drawer (#EMX0) I/O drawer
clock enhancement was configured. If cables with the incorrect
part numbers are used for an enhanced PCIe3 expansion drawer
configuration, the hypervisor will log a B7006A20 with PRC 4152
indicating an invalid configuration - https://www.ibm.com/support/knowledgecenter/9080-M9S/p9eai/B7006A20.htm.
- A problem was fixed for a drift in the system time (time
lags and the clock runs slower than the true value of time) that occurs
when the system is powered off to the service processor standby
state. To recover from this problem, the system time must be
manually corrected using the Advanced System Management Interface
(ASMI) before powering on the system. The time lag increases in
proportion to the duration of time that the system is powered off.
|
VH930_035_035 / FW930.00
05/17/19 |
Impact: New
Severity: New
All features and fixes from the FW920.30 service pack (and below)
are included in this release.
New Features and Functions
- Support was added to allow the FPGA soft error checking on
the PCIe I/O expansion drawer (#EMX0) to be disabled with the help of
IBM support using the hypervisor "xmsvc" macro. This new setting
will persist until it is changed by the user or IBM support.
The effect of disabling FPGA soft error checking is to eliminate the
FPGA soft error recovery which causes a recoverable PCIe adapter
outage. Some of the soft errors will be hidden by this change but
others may have unpredictable results, so this should be done only
under guidance of IBM support.
- Support for the PCIe3 expansion drawer (#EMX0) I/O drawer clock enhancement so that a reset of the drawer does not affect the reference clock to the adapters, allowing the PCIe lanes for the PCIe adapters to keep running through an I/O drawer FPGA reset. To use this support, new cable cards, fanout modules, and optical cables are needed after this support is installed: PCIe Six Slot Fan Out module (#EMXH), which is only allowed to be connected to the converter adapter cable card; PCIe X16 to CXP Optical or CU converter adapter for the expansion drawer (#EJ19); and new AOC cables with feature/part numbers #ECCR/78P6567, #ECCX/78P6568, #ECCY/78P6569, and #ECCZ/78P6570. These parts cannot be installed concurrently, so a scheduled outage is needed to complete the migration.
- Support added for RDMA Over Converged Ethernet (RoCE) for
SR-IOV adapters.
- Support added for SMS menu to enhance the I/O
information option to have "vscsi" and "network" options. The
information shown for "vscsi" devices is similar to that provided for
SAS and Fibre Channel devices. The "network" option provides
connectivity information for the adapter ports and shows which can be
used for network boots and installs.
- Support added to monitor the thermal sensors on the NVMe
SSD drives (feature codes #EC5J, #EC5K, #EC5L) and use that information
to adjust the speed of the system fans for improved cooling of the SSD
drives.
- Support added to allow integrated USB ports to be
disabled. This is available via an Advanced System Management
Interface (ASMI) menu option: "System Configuration ->
Security -> USB Policy". The USB disable policy, if selected,
does not apply to pluggable USB adapters plugged into PCIe slots such
as the 4-Port USB adapter (#EC45/#EC46), which are always enabled.
System firmware changes that affect all systems
- A problem was fixed for a clock card failure with SRC
B158CC62 logged calling out the wrong clock card and not calling out
the cable and system backplane as needed. This fix does not add
processors to the callout but in some cases the processor has also been
identified as the cause of the clock card failure.
- A problem was fixed for a system IPLing with an invalid
time set on the service processor that causes partitions to be reset to
the Epoch date of 01/01/1970. With the fix, on the IPL, the
hypervisor logs a B700120x when the service processor real time clock
is found to be invalid and halts the IPL to allow the time and date to
be corrected by the user. The Advanced System Management
Interface (ASMI) can be used to correct the time and date on the
service processor. On the next IPL, if the time and date have not
been corrected, the hypervisor will log an SRC B7001224 (indicating the user was warned on the last IPL) but allow the partitions to start; the time and date will be set to the Epoch value (see the illustration after this list).
- A problem was fixed for a possible boot failure from a
ISO/IEC 13346 formatted image, also known as Universal Disk Format
(UDF).
UDF is a profile of the specification known as ISO/IEC 13346 and is an
open vendor-neutral file system for computer data storage for a broad
range of media such as DVDs and newer optical disc formats. The
failure is infrequent and depends on the image. In rare cases,
the boot code erroneously fails to find a file in the current
directory. If the boot fails on a specific image, the boot of that image will always fail without the fix. (A sketch of how a UDF image can be recognized is given after this list.)
- A problem was fixed for broadcast bootp installs or boots that fail with a UDP checksum error. (The standard checksum computation is sketched after this list.)
- A problem was fixed for failing to boot from an AIX mksysb
backup on a USB RDX drive with SRCs logged of BA210012, AA06000D, and
BA090010. The boot error does not occur if a serial console is
used to navigate the SMS menus.
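The following is a minimal Python illustration of the Epoch value referenced in the invalid service processor time fix above; it only demonstrates that Unix time 0 corresponds to 01/01/1970 UTC and is not part of the firmware.

    from datetime import datetime, timezone

    # Unix time 0 is the Epoch date that partitions fall back to when the
    # service processor real time clock is found to be invalid.
    print(datetime.fromtimestamp(0, tz=timezone.utc))   # 1970-01-01 00:00:00+00:00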
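The following is a minimal sketch, in Python, of how a UDF (ISO/IEC 13346) image can be recognized; it is illustrative only and is not the platform boot code. It assumes a 2048-byte sector size and relies on the Volume Recognition Sequence that starts at sector 16, where an "NSR02" or "NSR03" standard identifier marks a UDF volume structure. The function name and image path are hypothetical.

    SECTOR = 2048  # assumed sector size for optical media images

    def is_udf_image(path: str) -> bool:
        """Return True if the image contains a UDF (NSR02/NSR03) volume structure."""
        with open(path, "rb") as img:
            img.seek(16 * SECTOR)              # Volume Recognition Sequence starts here
            for _ in range(16):                # scan a bounded number of descriptors
                vsd = img.read(SECTOR)
                if len(vsd) < 7:
                    break
                ident = vsd[1:6]               # 5-byte standard identifier field
                if ident in (b"NSR02", b"NSR03"):
                    return True                # UDF volume structure descriptor found
                if ident == b"TEA01" or ident == b"\x00" * 5:
                    break                      # end of the recognition sequence
        return False

    # Usage (the image path is illustrative):
    # print(is_udf_image("/tmp/install-image.iso"))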
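The following is a minimal sketch, in Python, of the standard RFC 768 UDP checksum over an IPv4 pseudo-header, included only to illustrate the kind of verification a bootp/network boot client performs; it is not the firmware's implementation, and the example addresses and payload are illustrative.

    import socket
    import struct

    def ones_complement_sum(data: bytes) -> int:
        """Sum 16-bit big-endian words with end-around carry."""
        if len(data) % 2:
            data += b"\x00"                              # pad odd-length data
        total = 0
        for (word,) in struct.iter_unpack("!H", data):
            total += word
            total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
        return total

    def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
        """Checksum of the IPv4 pseudo-header plus the UDP header and payload."""
        pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                  + struct.pack("!BBH", 0, 17, len(udp_segment)))   # zero, proto 17, length
        checksum = ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
        return checksum or 0xFFFF                        # zero is transmitted as 0xFFFF

    # Example: a bootp request sent from 0.0.0.0:68 broadcast to 255.255.255.255:67.
    payload = b"bootp-request"
    header = struct.pack("!HHHH", 68, 67, 8 + len(payload), 0)      # checksum field zero
    print(hex(udp_checksum("0.0.0.0", "255.255.255.255", header + payload)))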
|