Release notes for IBM Platform Cluster Manager 4.2.1 Fix Pack 1
Contents
Description
IBM Platform Cluster Manager 4.2.1 Fix Pack 1 offers hardware support for IBM Power Systems™. Fix Pack 1 supports IBM Power Systems S822LC and IBM Power Systems S812LC.
System requirements
Fix Pack 1 has the following system requirements.
| Hardware and Firmware | Software (Management Node) | Software (Compute Node) |
|---|---|---|
| IBM Power Systems S822LC. Minimum firmware version: OP810 1543C. Note: Only PowerNV mode is supported in Fix Pack 1. | Red Hat Enterprise Linux (RHEL) 7.2 Little Endian (LE) | Red Hat Enterprise Linux (RHEL) 7.2 Little Endian (LE) |
| IBM Power Systems S812LC. Minimum firmware version: OP810 1543C. Note: Only PowerNV mode is supported in Fix Pack 1. | Red Hat Enterprise Linux (RHEL) 7.2 Little Endian (LE) | Red Hat Enterprise Linux (RHEL) 7.2 Little Endian (LE) |
Fix Pack 1 is also supported on Power S822LC and Power S812LC compute nodes running RHEL 7.2 LE on PowerVM® LPARs where the management node is running Platform Cluster Manager 4.2.1 on x86 hardware with RHEL 7.1 installed. For installing Platform Cluster Manager on an x86 management node, refer to Cross-provisioning compute nodes.
What's new in this Fix Pack
IBM Platform Cluster Manager Version 4.2.1 Fix Pack 1 supports new hardware: IBM Power Systems S822LC and S812LC, which use a BMC network for out-of-band management.
Fix Pack 1 supports Red Hat Enterprise Linux (RHEL) 7.2 Little Endian.
Fix Pack 1 supports configuring and changing the BMC IP address for Power S822LC and Power S812LC nodes in a node information file. If you change the BMC IP address in the node information file when adding nodes to Platform Cluster Manager, the BMC IP address that the nodes use is updated during node provisioning. If you do not specify a BMC IP address in the node information file, an IP address is automatically assigned during node provisioning.
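As an illustration, a node information file entry might carry the BMC address alongside the node's other attributes. The stanza below is a hypothetical sketch: the exact layout and field names depend on your Platform Cluster Manager node information file format, so treat `bmc=` and the other attributes as assumptions, not documented syntax.

```
# Hypothetical node information file entry (field names are illustrative):
compute001:
   mac=98:be:94:59:fa:24
   ip=192.168.1.101
   bmc=192.168.2.101     # BMC IP address; omit to have one auto-assigned
```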
Fix Pack 1 supports installing Platform Cluster Manager 4.2.1 Fix Pack 1 with Network Manager enabled.
- In the Web Portal, navigate to the Resources tab and select .
- Select the RHEL 7.2 stateless image profile from the list and click Modify.
- In the General tab, remove the net.ifnames=0 boot parameter: in the Boot Parameters field, delete net.ifnames=0.
- Click Save. Wait for the image profile to be updated with the changes.
- Reprovision your compute nodes.
Fix Pack 1 supports all compute node provisioning methods, including importing a node information file, node discovery, and switch discovery.
Installing Platform Cluster Manager 4.2.1 Fix Pack 1
Install Platform Cluster Manager using the V4.2.1.1 ISO file (pcm-4.2.1.1.ppc64le.iso). For information about installing Platform Cluster Manager, see IBM Knowledge Center. If you want to keep Network Manager enabled during Platform Cluster Manager installation, refer to Installing Platform Cluster Manager 4.2.1 Fix Pack 1 with Network Manager enabled.
- Ensure that your BMC network settings, including the user name and password, are correct. The settings in the ipmi table must match your actual BMC network configuration.
# tabdump ipmi
#node,bmc,bmcport,taggedvlan,bmcid,username,password,comments,disable
"__HardwareProfile_IBM_PowerNV",,,,,,"PASSW0RD",,
"__HardwareProfile_IBM_System_p_LC",,,,,"ADMIN","admin",,
- Add compute nodes using the correct network profile and hardware profile. The network profile must use a BMC network, and the hardware profile must be set to IBM_System_p_LC for Power S822LC and Power S812LC hardware. If you change the BMC IP address in the network profile, then the BMC IP address that the nodes use is updated. For more information on adding compute nodes, refer to Adding compute nodes to a system.
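The credentials check above can be sketched as a small script. It parses one tabdump-style CSV row and compares the username and password columns (columns 6 and 7 in the header shown earlier) against expected values. The `check_bmc_credentials` helper is hypothetical, not part of Platform Cluster Manager:

```shell
#!/bin/sh
# Check that an ipmi table row carries the BMC credentials you expect.
# Assumes the tabdump CSV layout shown above: column 6 = username,
# column 7 = password. Quoted fields here contain no embedded commas.
check_bmc_credentials() {
    row="$1"            # one CSV row from: tabdump ipmi
    expected_user="$2"
    expected_pass="$3"
    user=$(printf '%s' "$row" | awk -F',' '{gsub(/"/,"",$6); print $6}')
    pass=$(printf '%s' "$row" | awk -F',' '{gsub(/"/,"",$7); print $7}')
    [ "$user" = "$expected_user" ] && [ "$pass" = "$expected_pass" ]
}

row='"__HardwareProfile_IBM_System_p_LC",,,,,"ADMIN","admin",,'
if check_bmc_credentials "$row" ADMIN admin; then
    echo "BMC credentials match"
else
    echo "BMC credentials differ from the ipmi table" >&2
fi
```

In practice you would feed the script the live row from `tabdump ipmi` rather than a pasted string.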
Installing Platform Cluster Manager 4.2.1 Fix Pack 1 with Network Manager enabled
To keep Network Manager enabled when installing Platform Cluster Manager 4.2.1 Fix Pack 1, complete the following steps:
- Enable Network Manager.
# systemctl enable NetworkManager
- Start Network Manager.
# systemctl start NetworkManager
- Ensure that Network Manager is active and enabled.
# service NetworkManager status
NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2015-11-24 01:32:58 EST; 32min ago
Main PID: 3153 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─3153 /usr/sbin/NetworkManager --no-daemon
└─3159 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid -lf /v...
- Run the Platform Cluster Manager installation.
See Installing Platform Cluster Manager.
When prompted to disable Network Manager, enter N.
....
Checking if SELinux is disabled... [ OK ]
Checking if Auto Update is disabled... [ OK ]
Checking if NetworkManager is disabled... [WARNING]
WARNING: NetworkManager is enabled on some runlevels.
Do you want to disable NetworkManager? (Y/N) [N]: N [ OK ]
....
- After installation, verify that Network Manager does not manage the DNS configuration. Open the Network Manager configuration file (NetworkManager.conf) in the /etc/NetworkManager directory and ensure that the dns parameter is set to dns=none.
# cat /etc/NetworkManager/NetworkManager.conf
[main]
plugins=ifcfg-rh
dns=none
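This verification step can be sketched as a short script, assuming GNU sed and the stock file layout shown above. `ensure_dns_none` below is a hypothetical helper rather than a Platform Cluster Manager tool, so try it against a scratch copy before touching the live file:

```shell
#!/bin/sh
# Verify that dns=none is set in NetworkManager.conf so that Network
# Manager does not rewrite /etc/resolv.conf; add it under [main] if missing.
ensure_dns_none() {
    conf="$1"    # path to a NetworkManager.conf file
    if grep -q '^dns=none$' "$conf"; then
        echo "dns=none already set in $conf"
    else
        # GNU sed: append dns=none right after the [main] section header.
        sed -i '/^\[main\]/a dns=none' "$conf"
        echo "added dns=none to $conf"
    fi
}

# Try it on a scratch copy, not the live file:
printf '[main]\nplugins=ifcfg-rh\n' > /tmp/NetworkManager.conf.test
ensure_dns_none /tmp/NetworkManager.conf.test
ensure_dns_none /tmp/NetworkManager.conf.test
```

The second call is a no-op: once dns=none is present, the helper only reports that it is already set.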
Cross-provisioning compute nodes
If you have an existing x86 environment, or you are installing Platform Cluster Manager using the V4.2.1 x86 ISO file (pcm-4.2.1.x64.iso), you can provision Power S822LC and Power S812LC compute nodes. To install Platform Cluster Manager, refer to Installing Platform Cluster Manager.
- Mount the Platform Cluster Manager 4.2.1 Fix Pack 1 ISO file.
# mount -o loop pcm-4.2.1.1.ppc64le.iso /mnt
- Copy the pcm-base-installer-4.2.1-1.noarch.rpm package to a temporary directory.
- Extract the pcm-base-installer-4.2.1-1.noarch.rpm package
files using the rpm2cpio command.
# cd /root/temp
# rpm2cpio pcm-base-installer-4.2.1-1.noarch.rpm | cpio -div
- Copy the extracted file that is named pcm-crossdistro-tool to the /opt/pcm/libexec/ directory.
- Enable support for RHEL 7.2 LE.
# pcm-crossdistro-tool -i pcm-4.2.1.1.ppc64le.iso
- Ensure that the related packages are installed on the management
node.
# rpm -qa | grep xCAT-dfm
# rpm -qa | grep ISNM-hdwr_svr-RHEL
- Ensure that the necessary hardware profiles are included.
# lsdef -t group |grep HardwareProfile
__HardwareProfile_IBM_Flex_System_p (group)
__HardwareProfile_IBM_Flex_System_x (group)
__HardwareProfile_IBM_NeXtScale_M4 (group)
__HardwareProfile_IBM_PowerKVM_Guest (group)
__HardwareProfile_IBM_PowerNV (group)
__HardwareProfile_IBM_System_p_CEC (group)
__HardwareProfile_IBM_System_p_LC (group)
__HardwareProfile_IBM_System_x_M4 (group)
__HardwareProfile_IBM_iDataPlex_M4 (group)
__HardwareProfile_IPMI (group)
- Log in to the Web Portal and add RHEL 7.2 ppc64le as an OS distribution. Adding an OS distribution creates a stateless and stateful image profile that is specified when compute nodes are added. See Adding OS distributions.
- Add compute nodes using the correct image profile, network profile, and hardware profile. The network profile must use a BMC network and the hardware profile must be set to IBM_System_p_LC for Power S822LC and Power S812LC hardware. Make sure that the management node can control the compute node out-of-band management network. For more information on adding compute nodes, refer to Adding compute nodes to a system.
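The hardware-profile check in the steps above can be sketched as a small script. `profile_defined` is a hypothetical helper that simply matches the `lsdef -t group` output format shown earlier:

```shell
#!/bin/sh
# Confirm that the hardware profile needed for Power S822LC/S812LC nodes
# (__HardwareProfile_IBM_System_p_LC) appears in the xCAT group list.
profile_defined() {
    groups_output="$1"   # captured output of: lsdef -t group
    profile="$2"         # e.g. IBM_System_p_LC
    printf '%s\n' "$groups_output" | grep -q "__HardwareProfile_${profile} (group)"
}

groups=$(lsdef -t group 2>/dev/null)
if profile_defined "$groups" IBM_System_p_LC; then
    echo "IBM_System_p_LC hardware profile is available"
else
    echo "IBM_System_p_LC profile not found; re-run pcm-crossdistro-tool" >&2
fi
```

Run it on the management node after enabling RHEL 7.2 LE support, before adding compute nodes.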
Updates, limitations, and known issues
- Power Consumption attributes for Power S812LC and Power S822LC nodes cannot be retrieved.
- While provisioning IBM Power Systems S822LC stateless nodes, the process hangs when trying to load nvidiafb.ko (#73854). In the Web Portal, the node's provisioning status remains set to provisioning. If you are provisioning stateless nodes from the CLI, the following messages are logged before the provisioning process hangs:
[ 185.755260] nvidiafb 0000:03:00.0: enabling device (0140 -> 0142)
[ 185.755543] nvidiafb: Device ID: 10de102d
[ 185.755629] nvidiafb: HW is currently programmed for CRT
[ 185.755844] nvidiafb: Using CRT on CRTC 0
To resolve this issue, modify the boot parameters in the image profile and try provisioning the nodes again:
- In the Web Portal, navigate to the Resources tab and select .
- Select the RHEL 7.2 stateless image profile from the list and click Modify.
- In the General tab, blacklist the nvidiafb module by appending a boot parameter: in the Boot Parameters field, add modprobe.blacklist=nvidiafb to the end of the string, after net.ifnames=0.
- Click Save. Wait for the image profile to be updated with the changes.
- Reprovision the stateless nodes with the updated image profile.
- Upgrading from Platform Cluster Manager V4.2.1 to V4.2.1.1 is not supported. To get V4.2.1.1, you must uninstall Platform Cluster Manager V4.2.1 and then install Platform Cluster Manager V4.2.1.1.
Documentation updates, limitations, and known problems are documented as individual technotes in the IBM Support Portal at http://www.ibm.com/support/entry/portal/product/platform_computing/platform_cluster_manager. As problems are discovered and resolved, IBM Support Portal updates the knowledge base. By searching the knowledge base, you can find workarounds or solutions to problems.