IBM Cloud Manager with OpenStack 4.3.0.6 interim fix 4 Readme

Readme file for: IBM Cloud Manager with OpenStack 4.3 interim fix 4 for fix pack 6
Product/Component Release: 4.3.0.6
Update Name: cmwo 4.3.0.6 interim fix 4
Fix ID: 4.3.0.6-IBM-CMWO-IF004
Publication Date: 2016-12-11
Last modified date: 2016-12-11

Online version of the readme file: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400003000
Important: The most current version of the readme file can always be found online.

Contents

Download location
Prerequisites and co-requisites
Known issues
Known limitations

Installation information
   Installing

List of fixes
Copyright and trademark information



Download location

Download updates for IBM Cloud Manager with OpenStack 4.3 from the following location:
http://www.ibm.com/eserver/support/fixes/

Below is a list of components, platforms, and file names that apply to this Readme file.

Fix Download for Linux
Product/Component Name: IBM Cloud Manager with OpenStack
Platforms: Linux 64-bit, x86_64; Linux 64-bit, pSeries
Fix: cmwo_fixpack_4.3.0.6.4.tar.gz





Prerequisites and co-requisites



If you are using IBM Cloud Manager with OpenStack to manage a z/VM hypervisor,
follow the z/VM service guide to apply z/VM APAR VM65753 before installing this
fix pack.


Known issues


Problem
In an HA environment, the status of all resources in the cluster is displayed as "FAILED" or "Stopped".

Root cause
This is a known issue in pacemaker 1.1.12.

Resolution
If needed, update pacemaker to 1.1.13 using the steps below.

Update pacemaker only if you have a valid license for its use.
1. Manually download the following packages:
pacemaker-libs-1.1.13-10.el7_2.4.x86_64.rpm
pacemaker-cli-1.1.13-10.el7_2.4.x86_64.rpm
pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64.rpm
pacemaker-1.1.13-10.el7_2.4.x86_64.rpm
corosync-2.3.4-7.el7_2.3.x86_64.rpm
corosynclib-2.3.4-7.el7_2.3.x86_64.rpm
2. Put all controller nodes in standby:
pcs cluster standby --all
3. Check the OpenStack service status and wait until all services are in Stopped status:
pcs status
4. Update pacemaker and corosync on each node with the following commands:
rpm -Uvh pacemaker-libs-1.1.13-10.el7_2.4.x86_64.rpm pacemaker-cli-1.1.13-10.el7_2.4.x86_64.rpm pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64.rpm pacemaker-1.1.13-10.el7_2.4.x86_64.rpm
rpm -Uvh corosync-2.3.4-7.el7_2.3.x86_64.rpm corosynclib-2.3.4-7.el7_2.3.x86_64.rpm
5. Reboot the operating system on all controller nodes.
6. After all controller nodes have rebooted, wait 5 minutes, then take all controllers out of standby:
pcs cluster unstandby --all
7. Check the OpenStack service status and wait until all services are in Started status:
pcs status
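Steps 3 and 7 require checking "pcs status" repeatedly by hand. A minimal shell sketch of that wait loop follows; the helper names and the 10-second interval are illustrative assumptions, not part of the fix pack:

```shell
# Helper: succeeds when the supplied `pcs status` text contains no line
# reporting the given state (e.g. no "Started" while waiting for Stopped).
state_absent() {
  local state="$1" status_text="$2"
  ! grep -q "$state" <<< "$status_text"
}

# Illustrative polling loop (function name and interval are assumptions):
# wait until no resource reports "Started" before updating the packages.
wait_until_all_stopped() {
  until state_absent "Started" "$(pcs status 2>/dev/null)"; do
    sleep 10
  done
}
```

An analogous loop waiting for "Stopped" to disappear covers step 7.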

Update lvm2
1. Get the following packages from the RHEL 7.2 ISO yum repository and copy them to all controller nodes:
device-mapper-libs-1.02.107-5.el7.x86_64.rpm
device-mapper-1.02.107-5.el7.x86_64.rpm
device-mapper-event-libs-1.02.107-5.el7.x86_64.rpm
device-mapper-event-1.02.107-5.el7.x86_64.rpm
lvm2-libs-2.02.130-5.el7.x86_64.rpm
device-mapper-persistent-data-0.5.5-1.el7.x86_64.rpm
lvm2-2.02.130-5.el7.x86_64.rpm
2. Update lvm2 on each controller node:
rpm -Uvh device-mapper-libs-1.02.107-5.el7.x86_64.rpm device-mapper-1.02.107-5.el7.x86_64.rpm device-mapper-event-libs-1.02.107-5.el7.x86_64.rpm device-mapper-event-1.02.107-5.el7.x86_64.rpm lvm2-libs-2.02.130-5.el7.x86_64.rpm device-mapper-persistent-data-0.5.5-1.el7.x86_64.rpm lvm2-2.02.130-5.el7.x86_64.rpm
3. Reboot the operating system on each controller node.



Known limitations


To work with PowerKVM compute nodes, the following steps are necessary.

1. Disable the "oslo-i18n" package in the yum repository:
In the file /etc/yum.repos.d/base.repo, add the following line at the end
of the [powerkvm-updates] section:
exclude=python2-oslo-i18n-*

If this package was already installed before installing IBM Cloud Manager with
OpenStack, remove it before starting the installation.

2. Set the "cpu_mode" property on PowerKVM compute nodes:
Right after deployment, and every time a new PowerKVM compute node is added to
the topology, make sure that on each of the PowerKVM compute nodes the
"cpu_mode" property in the [libvirt] section of the file /etc/nova/nova.conf
is set to "none" (string value), as follows:
cpu_mode=none
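The two adjustments above can be scripted. This is a hedged sketch only: it assumes [powerkvm-updates] is the last section of the repo file and that cpu_mode is not already set, and the function name is illustrative; verify both assumptions against your files before using it.

```shell
# Sketch of the two PowerKVM adjustments (assumptions noted inline).
powerkvm_prep() {
  local repo="$1"   # e.g. /etc/yum.repos.d/base.repo
  local nova="$2"   # e.g. /etc/nova/nova.conf

  # 1. Exclude python2-oslo-i18n in the [powerkvm-updates] section.
  #    Simple append: assumes that section is the last one in the file.
  grep -q '^exclude=python2-oslo-i18n-\*' "$repo" || \
    echo 'exclude=python2-oslo-i18n-*' >> "$repo"

  # 2. Set cpu_mode=none under [libvirt] in nova.conf.
  #    The sed insert assumes cpu_mode is not already present (GNU sed).
  grep -q '^cpu_mode' "$nova" || \
    sed -i '/^\[libvirt\]/a cpu_mode=none' "$nova"
}
```

Both edits are idempotent: rerunning the function leaves already-patched files unchanged.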



Installation information

Introduction


This file contains directions for installing the fix pack on the
IBM Cloud Manager with OpenStack deployment server and additional
information not available in the IBM Cloud Manager with OpenStack
Knowledge Center.

If you have already deployed a topology, you will need to update your
deployed topology after following the directions in this file. If the
special instructions in this file do not apply to your environment, you
still must update your deployed topology to apply other fixes contained in
this fix pack.

Directions for updating deployed topologies can be found in the IBM Cloud
Manager with OpenStack Knowledge Center.

Before installing



Be aware that updating a deployed topology will stop IBM Cloud Manager with
OpenStack services on each deployed node while it is being updated. Deploying
updates should not affect active virtual machines deployed using the
IBM Cloud Manager with OpenStack self-service portal or OpenStack.

This fix pack is cumulative. You can apply this fix pack to any previous
version of IBM Cloud Manager with OpenStack 4.3.x.x.

Installing



To install the IBM Cloud Manager with OpenStack fix pack, do the following:
1. Download the fix pack archive (e.g. cmwo_fixpack_4.3.0.6.4.tar.gz) to
a temporary directory on the deployment server.
2. Change to that directory and expand the archive:
# tar -zxf cmwo_fixpack_4.3.0.6.4.tar.gz
3. Run the fix pack installer:
# ./install_cmwo_fixpack.sh
4. If the fix pack installs successfully, you will see this message:
Installation of fix pack completed successfully.
Otherwise, you will see this message:
ERROR: Installation of fix pack failed. See log files for details.
Additional messages will tell you where the log files are stored.
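Steps 1 through 3 can be collected into a single helper. This is a sketch under the assumption that install_cmwo_fixpack.sh sits at the top level of the archive, as the steps imply; the function name and scratch directory are illustrative.

```shell
# Extract a fix pack archive into a scratch directory and run its installer
# (illustrative wrapper; archive and installer names come from the steps).
install_cmwo_fixpack_from() {
  local archive="$1"          # e.g. cmwo_fixpack_4.3.0.6.4.tar.gz
  local workdir
  workdir=$(mktemp -d) || return 1
  tar -zxf "$archive" -C "$workdir" || return 1
  ( cd "$workdir" && ./install_cmwo_fixpack.sh )   # step 3
}
```

The installer's exit status propagates, so a caller can branch on success before updating deployed topologies.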

After installing



After installing the fix pack, review the following sections to determine if
there are additional actions that must be performed:
- Update cookbook versions
- Update the environment attributes
- Update the high availability (HA) environment attributes
- Update the high availability (HA) software and configuration
- Update the deployed topology

Automated environment updates

IBM Cloud Manager with OpenStack includes a tool that can be used to automatically
perform certain environment updates:
- Update cookbook version constraints
- For HA environments, update the HA attributes

To update an environment named 'my-environment' stored in the chef server
use this command:
knife os manage update environment my-environment

To update a JSON environment file named 'my-environment.json' use this
command:
knife os manage update environment my-environment.json

The file name must end with the '.json' extension. If the file refers to an
existing chef environment, the file will also be uploaded to the chef server.

Manual environment updates

If the fix pack requires other environment changes, you can edit the
environment(s) used for your topologies using the following procedure.

Installing the fix pack updates the example environments:
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
If you have created an environment for your topology, or have created
an environment file, you must update these manually. If you do not,
future deployments or updates will continue to use the original
cookbook versions.

1. Change to the directory where you have created your topology files.

2. If you do not have your environment file, you can download the
current environment from the chef server:
# knife environment list
_default
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
test-environment

Identify the environment to change, e.g. test-environment, and download
it:

# knife environment show test-environment -d -F json > test-environment.json

3. Edit the environment file and modify it as required.

4. Save the file.

5. Upload the modified environment to the chef server:
# knife environment from file test-environment.json
Updated Environment test-environment

Update cookbook versions

This fix pack contains cookbook updates which require updates to the chef
environment(s) for your topologies.

If any of the following conditions are true, no action is required to update
cookbook versions, and you should continue with the next section of this
README file.
- You have not created an environment
- You created your environment after installing fix pack 4.3.0.6 or later.
- You updated the cookbook versions for your environment after installing
fix pack 4.3.0.6 or later.

Use the 'knife os manage update environment' command as described in
'Automated environment updates' to update your environment or
environment files.

This table lists the updated cookbook versions and the fix pack that
includes them.

Fix pack   Cookbook   Current version
4.3.0.1 db2 2.0.3
4.3.0.1 ibm-openstack-perf-tuning 11.0.1
4.3.0.1 ibm-openstack-zvm-driver 11.0.6
4.3.0.1 openstack-block-storage 11.1.0
4.3.0.2 apache2 3.1.0
4.3.0.2 galera 0.4.1
4.3.0.2 ibm-openstack-network 11.1.0
4.3.0.2 ibm-openstack-simple-token 11.0.1
4.3.0.2 ibm-sce 11.0.6
4.3.0.2 openstack-common 11.5.1
4.3.0.2 openstack-compute 11.1.0
4.3.0.2 openstack-identity 11.1.0
4.3.0.2 pacemaker 1.1.4
4.3.0.3 htpasswd 0.2.4
4.3.0.3 ibm-cls 1.0.0
4.3.0.3 ibm-openstack-apache-proxy 11.1.2
4.3.0.3 ibm-openstack-common 11.2.0
4.3.0.3 ibm-openstack-dr 11.0.3
4.3.0.3 ibm-openstack-ha 11.1.0
4.3.0.3 ibm-openstack-iptables 11.0.6
4.3.0.3 ibm-openstack-migration 11.0.30
4.3.0.3 ibm-openstack-powervc-driver 11.0.2
4.3.0.3 ibm-openstack-prs 11.1.0
4.3.0.3 ibm-openstack-roles 11.0.3
4.3.0.3 ibm-openstack-vmware-driver 11.0.5
4.3.0.3 mariadb 0.3.1
4.3.0.3 openstack-ops-messaging 11.1.0
4.3.0.3 rabbitmq 4.1.2
4.3.0.3 rsyslog 2.0.0
4.3.0.4 openstack-compute 11.2.0
4.3.0.4 openstack-network 11.1.0
4.3.0.4 openstack-dashboard 11.1.0
4.3.0.4 openstack-orchestration 11.1.0
4.3.0.4 ibm-openstack-common 11.4.0
4.3.0.4 ibm-openstack-ha 11.1.1
4.3.0.4 ibm-openstack-roles 11.0.5
4.3.0.4 ibm-openstack-network 11.1.1
4.3.0.5 contrail 1.0.0
4.3.0.5 ibm-openstack-common 11.4.1
4.3.0.5 ibm-openstack-ha 11.1.2
4.3.0.5 ibm-openstack-network 11.1.2
4.3.0.5 ibm-openstack-roles 11.0.6
4.3.0.5 ibm-openstack-vmware-driver 11.0.6
4.3.0.6 ibm-openstack-common 11.6.0
4.3.0.6 ibm-openstack-ha 11.1.3
4.3.0.6 pacemaker 1.1.5

Update the environment attributes

If any of the following conditions are true, no action is required to update
the environments, and you should continue with the next section of this
README file.
- You have not created an environment
- You created your environment after installing fix pack 4.3.0.6 or later.
- You updated the attributes for your environment after installing
fix pack 4.3.0.6 or later.

If you have not already done so, use the 'knife os manage update environment'
command as described in 'Automated environment updates' to update your
environment or environment files.

This table lists the new attributes in Fix Pack 4:
openstack.block-storage.rpc_backend = 'cinder.openstack.common.rpc.impl_kombu'
openstack.block-storage.rpc_thread_pool_size = 64
openstack.block-storage.rpc_conn_pool_size = 30
openstack.block-storage.rpc_response_timeout = 60
openstack.orchestration.platform.heat_common_packages = 'openstack-heat'
openstack.orchestration.platform.heat_api_packages = 'python-heatclient'
openstack.orchestration.platform.heat_api_cfn_packages = 'python-heatclient'
openstack.orchestration.platform.heat_api_cloudwatch_packages = 'python-heatclient'
openstack.orchestration.platform.heat_engine_packages = 'openstack-heat'
openstack.config.block_device_allocate_retries = 60
openstack.config.block_device_allocate_retries_interval = 3
ibm-openstack.first_region = true

This table lists the new attributes in Fix Pack 5:
contrail.ha = false
contrail.haproxy = true
contrail.manage_nova_compute = false
contrail.manage_neutron = false
contrail.multi_tenancy = false
contrail.router_asn = '64512'
contrail.network_ip = 'NET_VIRTUALIP'
contrail.network_pfxlen = '24'
contrail.compute.server_role = 'contrail-icm-compute'
contrail.compute.dns3 = 'DNS3'
contrail.compute.dns2 = 'DNS2'
contrail.compute.dns1 = 'DNS1'
contrail.compute.netmask = '255.255.255.0'
contrail.compute.interface = 'eth0'
contrail.compute.cidr = '10.1.1.0/24'
contrail.compute.gateway = '10.1.1.1'
contrail.compute.domain = 'test.com'
ibm-openstack.is_dedicated_node = false
ibm-openstack.use_dedicated_node = false
ibm-openstack.vmware-driver.vcenter_connection.host_port = 443
ibm-openstack.vmware-driver.vcenter_connection.http_pool_size = 50

Update the high availability (HA) environment attributes

If any of the following conditions are true, no action is required to update
the HA environments, and you should continue with the next section of this
README file.
- You have not created an HA environment
- You created your HA environment after installing fix pack 4.3.0.6 or later.
- You updated the HA attributes for your environment after installing
fix pack 4.3.0.6 or later.

If you have not already done so, use the 'knife os manage update environment'
command as described in 'Automated environment updates' to update your
HA environment or HA environment files.

This table lists the new HA attributes in Fix Pack 2:
openstack.mq.rabbitmq.heartbeat_timeout_threshold = '60'
openstack.mq.rabbitmq.heartbeat_rate = '2'

This table lists the new HA attributes in Fix Pack 3:
rabbitmq.clustering.use_auto_clustering = true
ibm-openstack.ha.pacemaker.cluster.resource.rabbitmq-meta.migration-threshold = '1'
ibm-openstack.ha.pacemaker.cluster.resource.rabbitmq-meta.failure-timeout = '160'

This table lists the new HA attributes in Fix Pack 4:
ibm-openstack.ha.use_external_db = false

This table lists the new HA attributes in Fix Pack 5:
contrail.ha = true
contrail.haproxy = true
contrail.manage_nova_compute = false
contrail.manage_neutron = false
contrail.multi_tenancy = false
contrail.router_asn = '64512'
contrail.network_ip = 'NET_VIRTUALIP'
contrail.network_pfxlen = '24'
contrail.compute.server_role = 'contrail-icm-compute'
contrail.compute.dns3 = 'DNS3'
contrail.compute.dns2 = 'DNS2'
contrail.compute.dns1 = 'DNS1'
contrail.compute.netmask = '255.255.255.0'
contrail.compute.interface = 'eth0'
contrail.compute.cidr = '10.1.1.0/24'
contrail.compute.gateway = '10.1.1.1'
contrail.compute.domain = 'test.com'
ibm-openstack.vmware-driver.vcenter_connection.host_ip = '8.8.8.8'
ibm-openstack.vmware-driver.vcenter_connection.host_username = 'admin'
ibm-openstack.vmware-driver.vcenter_connection.secret_name = 'openstack_vmware_secret_name'
ibm-openstack.vmware-driver.vcenter_connection.host_port = 443
ibm-openstack.vmware-driver.vcenter_connection.http_pool_size = 50
ibm-openstack.vmware-driver.vcenter_connection.wsdl_location = nil
ibm-openstack.vmware-driver.vcenter_connection.api_retry_count = 10
ibm-openstack.vmware-driver.vcenter_connection.task_poll_interval = 5
ibm-openstack.vmware-driver.compute.services = ['compute0']
ibm-openstack.vmware-driver.compute.compute0.compute_type = 'cluster'
ibm-openstack.vmware-driver.compute.compute0.cluster_name = ['cluster01']
ibm-openstack.vmware-driver.compute.compute0.datastore_regex = nil
ibm-openstack.vmware-driver.compute.compute0.datastore_cluster_name = nil
ibm-openstack.vmware-driver.compute.compute0.random_datastore = true
ibm-openstack.vmware-driver.compute.compute0.use_sdrs = false
ibm-openstack.vmware-driver.compute.compute0.vnc_port = 5900
ibm-openstack.vmware-driver.compute.compute0.vnc_port_total = 10000
ibm-openstack.vmware-driver.compute.compute0.use_linked_clone = true
ibm-openstack.vmware-driver.compute.compute0.vlan_interface = 'vmnic0'
ibm-openstack.vmware-driver.compute.compute0.maximum_objects = 100
ibm-openstack.vmware-driver.compute.compute0.integration_bridge = 'br-100'
ibm-openstack.vmware-driver.compute.compute0.use_displayname_uuid_for_vmname = true
ibm-openstack.vmware-driver.compute.compute0.enable_vm_hot_resize = true
ibm-openstack.vmware-driver.compute.compute0.strict_resize_memory = true
ibm-openstack.vmware-driver.compute.compute0.snapshot_image_format = 'vmdk'
ibm-openstack.vmware-driver.compute.compute0.vmwaretool_activation_enabled = true
ibm-openstack.vmware-driver.compute.compute0.domain_name = 'icm-domainname'
ibm-openstack.vmware-driver.compute.compute0.dns_suffix = 'icm.cn.ibm.com'
ibm-openstack.vmware-driver.compute.compute0.workgroup = 'WORKGROUP'
ibm-openstack.vmware-driver.compute.compute0.timezone = 90
ibm-openstack.vmware-driver.compute.compute0.organization_name = 'ibm.com'
ibm-openstack.vmware-driver.compute.compute0.product_key = ''
ibm-openstack.vmware-driver.compute.compute0.user_name = 'ibm'
ibm-openstack.vmware-driver.discovery.log.verbose = true
ibm-openstack.vmware-driver.discovery.auth.http_insecure = true
ibm-openstack.vmware-driver.discovery.auth.connection_cacert = ''
ibm-openstack.vmware-driver.discovery.common.staging_project_name = 'admin'
ibm-openstack.vmware-driver.discovery.common.staging_user = 'admin'
ibm-openstack.vmware-driver.discovery.common.instance_prefix = 'Discovered VM '
ibm-openstack.vmware-driver.discovery.common.flavor_prefix = 'Flavor for '
ibm-openstack.vmware-driver.discovery.common.instance_sync_interval = '20'
ibm-openstack.vmware-driver.discovery.common.template_sync_interval = 300
ibm-openstack.vmware-driver.discovery.common.portgroup_sync_interval = 300
ibm-openstack.vmware-driver.discovery.common.full_instance_sync_frequency = 30
ibm-openstack.vmware-driver.discovery.common.image_periodic_sync_interval_in_seconds = 300
ibm-openstack.vmware-driver.discovery.common.image_sync_retry_interval_time_in_seconds = 60
ibm-openstack.vmware-driver.discovery.common.image_limit = 500
ibm-openstack.vmware-driver.discovery.common.longrun_loop_interval = 7
ibm-openstack.vmware-driver.discovery.common.longrun_initial_delay = 10
ibm-openstack.vmware-driver.discovery.common.vmware_default_image_name = 'VMware Unknown Image'
ibm-openstack.vmware-driver.discovery.common.vm_ignore_list = ''
ibm-openstack.vmware-driver.discovery.common.allow_instance_deletion = true
ibm-openstack.vmware-driver.discovery.common.allow_template_deletion = true
ibm-openstack.vmware-driver.discovery.common.property_collector_max = 4000
ibm-openstack.vmware-driver.discovery.common.clusters = []
ibm-openstack.vmware-driver.discovery.common.host_resource_pools = []
ibm-openstack.vmware-driver.discovery.common.cluster_resource_pools = []
ibm-openstack.vmware-driver.discovery.network.physical_network_mappings = 'physnet1:vSwitch0'
ibm-openstack.vmware-driver.discovery.network.port_group_filter_list = []
ibm-openstack.vmware-driver.discovery.network.tenant_name = 'admin'
ibm-openstack.vmware-driver.discovery.network.allow_neutron_deletion = false
ibm-openstack.vmware-driver.network.use_dvs = true
ibm-openstack.vmware-driver.network.network_maps = 'physnet2:dvSwitch'
ibm-openstack.vmware-driver.block-storage.driver = 'cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver'
ibm-openstack.vmware-driver.block-storage.vmware_volume_folder = 'cinder-volumes'
ibm-openstack.vmware-driver.block-storage.vmware_image_transfer_timeout_secs = 7200
ibm-openstack.vmware-driver.block-storage.vmware_max_objects_retrieval = 100
ibm-openstack.vmware-driver.image.default_store = 'file'
ibm-openstack.vmware-driver.image.stores = ['file','http']
ibm-openstack.vmware-driver.image.show_image_direct_url = true
ibm-openstack.vmware-driver.image.vmware_datacenter_path = ''
ibm-openstack.vmware-driver.image.vmware_datastore_name = ''
ibm-openstack.vmware-driver.image.vmware_store_image_dir = '/openstack_glance'
ibm-openstack.vmware-driver.image.vmware_api_insecure = false

This table lists the new HA attributes in Fix Pack 6:
ibm-openstack.ha.multi-site = false
ibm-openstack.ha.pacemaker.cluster.node.constraint_location_score = nil
ibm-openstack.ha.pacemaker.properties.no-quorum-policy.value = stop

Update the high availability (HA) software and configuration from Fix Pack 1

If you deployed high availability (HA) topologies using Fix Pack 1 and have
not updated them to Fix Pack 2 or later then perform the actions in this section.
Otherwise, you should continue with the next section of this README file.

Fix Pack 2 and later contain an updated version of the RabbitMQ messaging software
used in HA topologies. The fix pack also contains a change to the Pacemaker
DB2 HADR agent configuration to fix a problem where Pacemaker repeatedly
tries to restart DB2 HADR on a failing node and never promotes another node
to become the master.

Special steps are required to upgrade RabbitMQ and update the DB2 HADR agent
configuration on the HA controller nodes. Perform the following commands on
the HA controllers under root authority:

1. On any HA controller node, run these commands:
"pcs resource update ibm-os-db2hadr-master meta migration-threshold=3 failure-timeout=5m"
"pcs resource update ibm-os-rabbitmq meta migration-threshold=1 failure-timeout=160"

2. On any HA controller node, run this command:
"pcs resource disable ibm-os-rabbitmq --wait=450"

3. Since the previous command takes variable time to complete, and may return
a timeout error, you should run the following command until you see that
ibm-os-rabbitmq is stopped on all the HA controller nodes:
"pcs resource | grep -A1 ibm-os-rabbitmq-clone"

4. On each HA controller node, run these commands:
"yum clean expire-cache"
"yum update rabbitmq-server"
"yum update python-oslo-messaging"

5. On any HA controller node, run this command:
"pcs resource enable ibm-os-rabbitmq --wait=450"

6. Since the previous command takes variable time to complete, and may return
a timeout error, you should run the following command until you see that
ibm-os-rabbitmq is started on all the HA controller nodes:
"pcs resource | grep -A1 ibm-os-rabbitmq-clone"

Update the high availability (HA) configuration from Fix Pack 2

If you deployed high availability (HA) topologies using Fix Pack 2 and have not updated
them to Fix Pack 3 or later, then perform the actions in this section.
Otherwise, you should continue with the next section of this README file.

1. On any HA controller node, run this command:
"pcs resource update ibm-os-rabbitmq meta migration-threshold=1 failure-timeout=160"

Update the deployed topology

After making the changes described above, update your deployed topology to
apply the fixes contained in this fix pack.

If you did not deploy a topology prior to installing this fix pack, no
further action is required.

The IBM Cloud Manager with OpenStack Knowledge Center has more information
on updating a deployed topology.


Uninstalling



This fix pack cannot be uninstalled.


List of fixes

Update log (12/09/2016):
IBM Cloud Manager with OpenStack 4.3 ifix 4.3.0.6.4 includes:
- Add support for PowerVC 1.3.2 and 1.3.1.
- Fix the problem that the command "openstackclient quota show" does not work correctly.
- Fix the problem that the command "openstackclient quota set" does not work correctly.
- Fix the problem that a server flavor change (resize) ends with an error.
- Fix a case error in the deployment name when the hypervisor is PowerVC.
- Add support for updating a topology as a non-root user.
- Fix a request detail page crash error in Horizon.
- Fix a VMware region error: resize always fails for images with capacity larger than 1 GB.
- Fix the get_file error in Heat templates that causes stack create/update/preview to fail.
- OpenStack Kilo ifixes for PSIRT for ICM "Open Source Apache Xerces-C XML parser Vulnerabilities -- including XML4C" (CVE-2016-0729)
- OpenStack Kilo ifixes for PSIRT for ICM "Open Source Apache Xerces-C XML parser Vulnerabilities" (CVE-2016-4463)
- IBM SmartCloud Entry JRE updated for PSIRT for SCE/ICM "IBM SDK, Java Technology Edition Quarterly CPU - Jul 2016 -
Includes Oracle Jul 2016 CPU" (CVE-2016-3610 CVE-2016-3598 CVE-2016-3606 CVE-2016-3587 CVE-2016-3511
CVE-2016-3508 CVE-2016-3550 CVE-2016-3500 CVE-2016-3458 CVE-2016-3485 Not Applicable CVE-2016-3498 CVE-2016-3552 CVE-2016-3503)
- IBM SmartCloud Entry JRE updated for PSIRT for SCE/ICM "IBM SDK, Java Technology Edition Quarterly CPU - Oct 2016 -
Includes Oracle Oct 2016 CPU" (CVE-2016-5582 CVE-2016-5568 CVE-2016-5556 CVE-2016-5573 CVE-2016-5597 CVE-2016-5554 CVE-2016-5542)

Update log (10/16/2016):
IBM Cloud Manager with OpenStack 4.3 ifix 4.3.0.6.3a includes:
- Fix the problem that using a non-default port to connect to vCenter raises a VMware discovery error.
- Fix the problem that an incomplete instance cannot be deleted.
- Modify the dependency relationship so that kombu newer than 3.0.0 is installed automatically.
- Add support for the Cinder backup service.
- OpenStack Kilo ifixes for PSIRT for ICM "opensource openstack vuln." (CVE-2016-2140)

Update log (09/23/2016):
IBM Cloud Manager with OpenStack 4.3 ifix 4.3.0.6.2 includes:
- Add a new property, Port, in the VMware Cinder template to let customers use a non-default port to connect to vCenter.
- OpenStack Kilo ifixes for PSIRT for ICM "opensource openstack vuln." (CVE-2016-0757)
- Modify the Ceilometer database cleanup scripts to use a NoSQL method in place of the old one.
- Add support in high availability (HA) and maintenance mode for automatically moving VM instances that booted from a volume away from the related host when using Platform Resource Scheduler (PRS).

Update log (07/25/2016):
IBM Cloud Manager with OpenStack 4.3 ifix 4.3.0.6.1 includes:
- Fix a VMware driver code issue on vCenter 5.5u3 with SDRS enabled.
- IBM SmartCloud Entry JRE update for PSIRT for SCE/ICM "IBM SDK, Java Technology Edition Quarterly CPU - Apr 2016 - Includes Oracle Apr 2016 CPU + 3 IBM CVEs CVE-2016-3443 CVE-2016-0687 CVE-2016-0686 CVE-2016-3427 CVE-2016-3449 CVE-2016-3425 CVE-2016-3422 CVE-2016-0695 CVE-2016-3426 CVE-2016-0636 CVE-2016-0363 CVE-2016-0376 CVE-2016-0264"
- OpenStack Kilo ifixes for PSIRT for ICM/SCE Appliance "opensource openstack vuln." (CVE-2015-8749 CVE-2015-7548 CVE-2015-8466 CVE-2015-5295 CVE-2015-5306 CVE-2015-1850 CVE-2015-8749 CVE-2015-7548 CVE-2015-8466 CVE-2015-5295 CVE-2015-5306 CVE-2015-1850)
- IBM Cloud Manager with OpenStack Chef OpenSSL update for PSIRT for ICM: "OpenSource OpenSSL Vuln." (CVE-2016-0701 CVE-2015-3197 CVE-2016-0705 CVE-2016-0798 CVE-2016-0797 CVE-2016-0799 CVE-2016-0702 CVE-2016-0703 CVE-2016-0704 CVE-2016-2842 CVE-2016-2108 CVE-2016-2107 CVE-2016-2105 CVE-2016-2106 CVE-2016-2109 CVE-2016-2176)



Copyright and trademark information

This fix is subject to the terms of the license agreement which accompanied, or was contained in, the Program for which you are obtaining the fix. You are not authorized to install or use the fix except as part of a Program for which you have a valid Proof of Entitlement.

SUBJECT TO ANY WARRANTIES WHICH CAN NOT BE EXCLUDED OR EXCEPT AS EXPLICITLY AGREED TO IN THE APPLICABLE LICENSE AGREEMENT OR AN APPLICABLE SUPPORT AGREEMENT, IBM MAKES NO WARRANTIES OR CONDITIONS EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OR CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON INFRINGEMENT, REGARDING THE PTF.

By furnishing this document, IBM grants no licenses to any related patents or copyrights.

The applicable license agreement may have been provided to you in printed form and/or may be viewed at http://www-03.ibm.com/software/sla/sladb.nsf/viewbla/


Copyright © IBM Corporation 2015, 2016