Below is a list of components, platforms, and file names that apply to this Readme file.
After installing
After installing the fix pack, review the following sections to determine if
there are additional actions that must be performed:
- Update cookbook versions
- Update the environment attributes
- Update the high availability (HA) environment attributes
- Update the high availability (HA) software and configuration
- Update the deployed topology
Automated environment updates
IBM Cloud Manager with OpenStack includes a tool that can be used to automatically
perform certain environment updates:
- Update cookbook version constraints
- For HA environments, update the HA attributes
To update an environment named 'my-environment' stored in the chef server,
use this command:
knife os manage update environment my-environment
To update a JSON environment file named 'my-environment.json' use this
command:
knife os manage update environment my-environment.json
The file name must end with the '.json' extension. If the file refers to an
existing chef environment, the file will also be uploaded to the chef server.
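As an illustration of the naming rule above (a plain name targets an environment on the chef server, while a name ending in '.json' targets a local file), a small wrapper could dispatch on the argument form. This is a hedged sketch only: 'update_env' is a hypothetical helper, and 'echo' stands in for the real 'knife' invocation so the sketch runs without a Chef installation.

```shell
# Hypothetical dispatcher illustrating the '.json' naming rule.
# 'echo' replaces the real 'knife os manage update environment' call.
update_env() {
    case "$1" in
        *.json) echo "updating environment file: $1" ;;
        *)      echo "updating chef server environment: $1" ;;
    esac
}

update_env my-environment        # prints: updating chef server environment: my-environment
update_env my-environment.json   # prints: updating environment file: my-environment.json
```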
Manual environment updates
If the fix pack requires other environment changes, you can edit the
environment(s) used for your topologies using the following procedure.
Installing the fix pack updates the example environments:
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
If you have created an environment for your topology, or have created
an environment file, you must update it manually. If you do not,
future deploys or updates will continue to use the original
cookbook versions.
1. Change to the directory where you have created your topology files.
2. If you do not have your environment file, you can download the
current environment from the chef server:
# knife environment list
_default
example-ibm-os-allinone
example-ibm-os-ha-controller-n-compute
example-ibm-os-single-controller-n-compute
example-ibm-sce
test-environment
Identify the environment to change, e.g. test-environment, and download
it:
# knife environment show test-environment -d -F json > test-environment.json
3. Edit the environment file and modify it as required.
4. Save the file.
5. Upload the modified environment to the chef server:
# knife environment from file test-environment.json
Updated Environment test-environment
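For orientation during step 3, a downloaded environment file follows the standard Chef environment JSON shape shown below. The version constraint and attribute values here are placeholders for illustration, not values mandated by this fix pack.

```json
{
  "name": "test-environment",
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "cookbook_versions": {
    "openstack-compute": "= 11.3.8"
  },
  "override_attributes": {
    "openstack": {
      "region": "RegionOne"
    }
  }
}
```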
Update cookbook versions
This fix pack contains cookbook updates which require updates to the chef
environment(s) for your topologies.
If any of the following conditions are true, no action is required to update
cookbook versions, and you should continue with the next section of this
README file.
- You have not created an environment
- You created your environment after installing fix pack 4.3.0.8 or later.
- You updated the cookbook versions for your environment after installing
fix pack 4.3.0.8 or later.
Use the 'knife os manage update environment' command as described in
'Automated environment updates' to update your environment or
environment files.
This table lists the updated cookbook versions and the fix pack that
includes them.
Fix pack | Cookbook                     | Current version
---------|------------------------------|----------------
4.3.0.1  | db2                          | 2.0.3
4.3.0.1  | ibm-openstack-perf-tuning    | 11.0.1
4.3.0.1  | ibm-openstack-zvm-driver     | 11.0.6
4.3.0.1  | openstack-block-storage      | 11.1.0
4.3.0.2  | apache2                      | 3.1.0
4.3.0.2  | galera                       | 0.4.1
4.3.0.2  | ibm-openstack-network        | 11.1.0
4.3.0.2  | ibm-openstack-simple-token   | 11.0.1
4.3.0.2  | ibm-sce                      | 11.0.6
4.3.0.2  | openstack-common             | 11.5.1
4.3.0.2  | openstack-compute            | 11.1.0
4.3.0.2  | openstack-identity           | 11.1.0
4.3.0.2  | pacemaker                    | 1.1.4
4.3.0.3  | htpasswd                     | 0.2.4
4.3.0.3  | ibm-cls                      | 1.0.0
4.3.0.3  | ibm-openstack-apache-proxy   | 11.1.2
4.3.0.3  | ibm-openstack-common         | 11.2.0
4.3.0.3  | ibm-openstack-dr             | 11.0.3
4.3.0.3  | ibm-openstack-ha             | 11.1.0
4.3.0.3  | ibm-openstack-iptables       | 11.0.6
4.3.0.3  | ibm-openstack-migration      | 11.0.30
4.3.0.3  | ibm-openstack-powervc-driver | 11.0.2
4.3.0.3  | ibm-openstack-prs            | 11.1.0
4.3.0.3  | ibm-openstack-roles          | 11.0.3
4.3.0.3  | ibm-openstack-vmware-driver  | 11.0.5
4.3.0.3  | mariadb                      | 0.3.1
4.3.0.3  | openstack-ops-messaging      | 11.1.0
4.3.0.3  | rabbitmq                     | 4.1.2
4.3.0.3  | rsyslog                      | 2.0.0
4.3.0.4  | openstack-compute            | 11.2.0
4.3.0.4  | openstack-network            | 11.1.0
4.3.0.4  | openstack-dashboard          | 11.1.0
4.3.0.4  | openstack-orchestration      | 11.1.0
4.3.0.4  | ibm-openstack-common         | 11.4.0
4.3.0.4  | ibm-openstack-ha             | 11.1.1
4.3.0.4  | ibm-openstack-roles          | 11.0.5
4.3.0.4  | ibm-openstack-network        | 11.1.1
4.3.0.5  | contrail                     | 1.0.0
4.3.0.5  | ibm-openstack-common         | 11.4.1
4.3.0.5  | ibm-openstack-ha             | 11.1.2
4.3.0.5  | ibm-openstack-network        | 11.1.2
4.3.0.5  | ibm-openstack-roles          | 11.0.6
4.3.0.5  | ibm-openstack-vmware-driver  | 11.0.6
4.3.0.6  | ibm-openstack-common         | 11.6.0
4.3.0.6  | ibm-openstack-ha             | 11.1.3
4.3.0.6  | pacemaker                    | 1.1.5
4.3.0.7  | openstack-compute            | 11.3.0
4.3.0.7  | ibm-openstack-common         | 11.8.0
4.3.0.7  | ibm-openstack-ha             | 11.1.4
4.3.0.7  | ibm-openstack-roles          | 11.0.7
4.3.0.7  | ibm-openstack-vmware-driver  | 11.0.7
4.3.0.7  | ibm-openstack-yum-server     | 11.0.1
4.3.0.7  | ibm-openstack-zvm-driver     | 11.1.1
4.3.0.8  | openstack-compute            | 11.3.8
4.3.0.8  | openstack-orchestration      | 11.2.0
4.3.0.8  | ibm-openstack-common         | 11.9.0
4.3.0.8  | ibm-openstack-vmware-driver  | 11.0.8
4.3.0.8  | ibm-openstack-yum-server     | 11.0.2
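The automated 'knife os manage update environment' command maintains these version constraints for you. If you edit an environment by hand instead, the 'cookbook_versions' section would pin the 4.3.0.8-level versions from the table, for example:

```json
"cookbook_versions": {
  "openstack-compute": "= 11.3.8",
  "openstack-orchestration": "= 11.2.0",
  "ibm-openstack-common": "= 11.9.0",
  "ibm-openstack-vmware-driver": "= 11.0.8",
  "ibm-openstack-yum-server": "= 11.0.2"
}
```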
Update the environment attributes
If any of the following conditions are true, no action is required to update
the environments, and you should continue with the next section of this
README file.
- You have not created an environment
- You created your environment after installing fix pack 4.3.0.8 or later.
- You updated the attributes for your environment after installing
fix pack 4.3.0.8 or later.
If you have not already done so, use the 'knife os manage update environment'
command as described in 'Automated environment updates' to update your
environment or environment files.
This table lists the new attributes in Fix Pack 4:
openstack.block-storage.rpc_backend = 'cinder.openstack.common.rpc.impl_kombu'
openstack.block-storage.rpc_thread_pool_size = 64
openstack.block-storage.rpc_conn_pool_size = 30
openstack.block-storage.rpc_response_timeout = 60
openstack.orchestration.platform.heat_common_packages = 'openstack-heat'
openstack.orchestration.platform.heat_api_packages = 'python-heatclient'
openstack.orchestration.platform.heat_api_cfn_packages = 'python-heatclient'
openstack.orchestration.platform.heat_api_cloudwatch_packages = 'python-heatclient'
openstack.orchestration.platform.heat_engine_packages = 'openstack-heat'
openstack.config.block_device_allocate_retries = 60
openstack.config.block_device_allocate_retries_interval = 3
ibm-openstack.first_region = true
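The dotted names above use the a.b.c convention for nested Chef attributes; in an environment file they become nested JSON under 'override_attributes'. For example, two of the Fix Pack 4 attributes would be written as:

```json
"override_attributes": {
  "openstack": {
    "config": {
      "block_device_allocate_retries": 60,
      "block_device_allocate_retries_interval": 3
    }
  },
  "ibm-openstack": {
    "first_region": true
  }
}
```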
This table lists the new attributes in Fix Pack 5:
contrail.ha = false
contrail.haproxy = true
contrail.manage_nova_compute = false
contrail.manage_neutron = false
contrail.multi_tenancy = false
contrail.router_asn = '64512'
contrail.network_ip = 'NET_VIRTUALIP'
contrail.network_pfxlen = '24'
contrail.compute.server_role = 'contrail-icm-compute'
contrail.compute.dns3 = 'DNS3'
contrail.compute.dns2 = 'DNS2'
contrail.compute.dns1 = 'DNS1'
contrail.compute.netmask = '255.255.255.0'
contrail.compute.interface = 'eth0'
contrail.compute.cidr = '10.1.1.0/24'
contrail.compute.gateway = '10.1.1.1'
contrail.compute.domain = 'test.com'
ibm-openstack.is_dedicated_node = false
ibm-openstack.use_dedicated_node = false
ibm-openstack.vmware-driver.vcenter_connection.host_port = 443
ibm-openstack.vmware-driver.vcenter_connection.http_pool_size = 50
Update the high availability (HA) environment attributes
If any of the following conditions are true, no action is required to update
the HA environments, and you should continue with the next section of this
README file.
- You have not created an HA environment
- You created your HA environment after installing fix pack 4.3.0.8 or later.
- You updated the HA attributes for your environment after installing
fix pack 4.3.0.8 or later.
If you have not already done so, use the 'knife os manage update environment'
command as described in 'Automated environment updates' to update your
HA environment or HA environment files.
This table lists the new HA attributes in Fix Pack 2:
openstack.mq.rabbitmq.heartbeat_timeout_threshold = '60'
openstack.mq.rabbitmq.heartbeat_rate = '2'
This table lists the new HA attributes in Fix Pack 3:
rabbitmq.clustering.use_auto_clustering = true
ibm-openstack.ha.pacemaker.cluster.resource.rabbitmq-meta.migration-threshold = '1'
ibm-openstack.ha.pacemaker.cluster.resource.rabbitmq-meta.failure-timeout = '160'
This table lists the new HA attributes in Fix Pack 4:
ibm-openstack.ha.use_external_db = false
This table lists the new HA attributes in Fix Pack 5:
contrail.ha = true
contrail.haproxy = true
contrail.manage_nova_compute = false
contrail.manage_neutron = false
contrail.multi_tenancy = false
contrail.router_asn = '64512'
contrail.network_ip = 'NET_VIRTUALIP'
contrail.network_pfxlen = '24'
contrail.compute.server_role = 'contrail-icm-compute'
contrail.compute.dns3 = 'DNS3'
contrail.compute.dns2 = 'DNS2'
contrail.compute.dns1 = 'DNS1'
contrail.compute.netmask = '255.255.255.0'
contrail.compute.interface = 'eth0'
contrail.compute.cidr = '10.1.1.0/24'
contrail.compute.gateway = '10.1.1.1'
contrail.compute.domain = 'test.com'
ibm-openstack.vmware-driver.vcenter_connection.host_ip = '8.8.8.8'
ibm-openstack.vmware-driver.vcenter_connection.host_username = 'admin'
ibm-openstack.vmware-driver.vcenter_connection.secret_name = 'openstack_vmware_secret_name'
ibm-openstack.vmware-driver.vcenter_connection.host_port = 443
ibm-openstack.vmware-driver.vcenter_connection.http_pool_size = 50
ibm-openstack.vmware-driver.vcenter_connection.wsdl_location = nil
ibm-openstack.vmware-driver.vcenter_connection.api_retry_count = 10
ibm-openstack.vmware-driver.vcenter_connection.task_poll_interval = 5
ibm-openstack.vmware-driver.compute.services = ['compute0']
ibm-openstack.vmware-driver.compute.compute0.compute_type = 'cluster'
ibm-openstack.vmware-driver.compute.compute0.cluster_name = ['cluster01']
ibm-openstack.vmware-driver.compute.compute0.datastore_regex = nil
ibm-openstack.vmware-driver.compute.compute0.datastore_cluster_name = nil
ibm-openstack.vmware-driver.compute.compute0.random_datastore = true
ibm-openstack.vmware-driver.compute.compute0.use_sdrs = false
ibm-openstack.vmware-driver.compute.compute0.vnc_port = 5900
ibm-openstack.vmware-driver.compute.compute0.vnc_port_total = 10000
ibm-openstack.vmware-driver.compute.compute0.use_linked_clone = true
ibm-openstack.vmware-driver.compute.compute0.vlan_interface = 'vmnic0'
ibm-openstack.vmware-driver.compute.compute0.maximum_objects = 100
ibm-openstack.vmware-driver.compute.compute0.integration_bridge = 'br-100'
ibm-openstack.vmware-driver.compute.compute0.use_displayname_uuid_for_vmname = true
ibm-openstack.vmware-driver.compute.compute0.enable_vm_hot_resize = true
ibm-openstack.vmware-driver.compute.compute0.strict_resize_memory = true
ibm-openstack.vmware-driver.compute.compute0.snapshot_image_format = 'vmdk'
ibm-openstack.vmware-driver.compute.compute0.vmwaretool_activation_enabled = true
ibm-openstack.vmware-driver.compute.compute0.domain_name = 'icm-domainname'
ibm-openstack.vmware-driver.compute.compute0.dns_suffix = 'icm.cn.ibm.com'
ibm-openstack.vmware-driver.compute.compute0.workgroup = 'WORKGROUP'
ibm-openstack.vmware-driver.compute.compute0.timezone = 90
ibm-openstack.vmware-driver.compute.compute0.organization_name = 'ibm.com'
ibm-openstack.vmware-driver.compute.compute0.product_key = ''
ibm-openstack.vmware-driver.compute.compute0.user_name = 'ibm'
ibm-openstack.vmware-driver.discovery.log.verbose = true
ibm-openstack.vmware-driver.discovery.auth.http_insecure = true
ibm-openstack.vmware-driver.discovery.auth.connection_cacert = ''
ibm-openstack.vmware-driver.discovery.common.staging_project_name = 'admin'
ibm-openstack.vmware-driver.discovery.common.staging_user = 'admin'
ibm-openstack.vmware-driver.discovery.common.instance_prefix = 'Discovered VM '
ibm-openstack.vmware-driver.discovery.common.flavor_prefix = 'Flavor for '
ibm-openstack.vmware-driver.discovery.common.instance_sync_interval = '20'
ibm-openstack.vmware-driver.discovery.common.template_sync_interval = 300
ibm-openstack.vmware-driver.discovery.common.portgroup_sync_interval = 300
ibm-openstack.vmware-driver.discovery.common.full_instance_sync_frequency = 30
ibm-openstack.vmware-driver.discovery.common.image_periodic_sync_interval_in_seconds = 300
ibm-openstack.vmware-driver.discovery.common.image_sync_retry_interval_time_in_seconds = 60
ibm-openstack.vmware-driver.discovery.common.image_limit = 500
ibm-openstack.vmware-driver.discovery.common.longrun_loop_interval = 7
ibm-openstack.vmware-driver.discovery.common.longrun_initial_delay = 10
ibm-openstack.vmware-driver.discovery.common.vmware_default_image_name = 'VMware Unknown Image'
ibm-openstack.vmware-driver.discovery.common.vm_ignore_list = ''
ibm-openstack.vmware-driver.discovery.common.allow_instance_deletion = true
ibm-openstack.vmware-driver.discovery.common.allow_template_deletion = true
ibm-openstack.vmware-driver.discovery.common.property_collector_max = 4000
ibm-openstack.vmware-driver.discovery.common.clusters = []
ibm-openstack.vmware-driver.discovery.common.host_resource_pools = []
ibm-openstack.vmware-driver.discovery.common.cluster_resource_pools = []
ibm-openstack.vmware-driver.discovery.network.physical_network_mappings = 'physnet1:vSwitch0'
ibm-openstack.vmware-driver.discovery.network.port_group_filter_list = []
ibm-openstack.vmware-driver.discovery.network.tenant_name = 'admin'
ibm-openstack.vmware-driver.discovery.network.allow_neutron_deletion = false
ibm-openstack.vmware-driver.network.use_dvs = true
ibm-openstack.vmware-driver.network.network_maps = 'physnet2:dvSwitch'
ibm-openstack.vmware-driver.block-storage.driver = 'cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver'
ibm-openstack.vmware-driver.block-storage.vmware_volume_folder = 'cinder-volumes'
ibm-openstack.vmware-driver.block-storage.vmware_image_transfer_timeout_secs = 7200
ibm-openstack.vmware-driver.block-storage.vmware_max_objects_retrieval = 100
ibm-openstack.vmware-driver.image.default_store = 'file'
ibm-openstack.vmware-driver.image.stores = ['file','http']
ibm-openstack.vmware-driver.image.show_image_direct_url = true
ibm-openstack.vmware-driver.image.vmware_datacenter_path = ''
ibm-openstack.vmware-driver.image.vmware_datastore_name = ''
ibm-openstack.vmware-driver.image.vmware_store_image_dir = '/openstack_glance'
ibm-openstack.vmware-driver.image.vmware_api_insecure = false
This table lists the new HA attributes in Fix Pack 6:
ibm-openstack.ha.multi-site = false
ibm-openstack.ha.pacemaker.cluster.node.constraint_location_score = nil
ibm-openstack.ha.pacemaker.properties.no-quorum-policy.value = 'stop'
Update the high availability (HA) software and configuration from Fix Pack 1
If you deployed high availability (HA) topologies using Fix Pack 1 and have
not updated them to Fix Pack 2 or later, then perform the actions in this section.
Otherwise, you should continue with the next section of this README file.
Fix Pack 2 and later contain an updated version of the RabbitMQ messaging software
used in HA topologies. The fix pack also contains a change to the Pacemaker
DB2 HADR agent configuration to fix a problem where Pacemaker repeatedly
tries to restart DB2 HADR on a failing node and never promotes another node
to become the master.
Special steps are required to upgrade RabbitMQ and update the DB2 HADR agent
configuration on the HA controller nodes. Perform the following commands on
the HA controllers under root authority:
1. On any HA controller node, run these commands:
"pcs resource update ibm-os-db2hadr-master meta migration-threshold=3 failure-timeout=5m"
"pcs resource update ibm-os-rabbitmq meta migration-threshold=1 failure-timeout=160"
2. On any HA controller node, run this command:
"pcs resource disable ibm-os-rabbitmq --wait=450"
3. Because the previous command can take a variable amount of time to complete,
and may return a timeout error, run the following command until you see that
ibm-os-rabbitmq is stopped on all the HA controller nodes:
"pcs resource | grep -A1 ibm-os-rabbitmq-clone"
4. On each HA controller node, run these commands:
"yum clean expire-cache"
"yum update rabbitmq-server"
"yum update python-oslo-messaging"
5. On any HA controller node, run this command:
"pcs resource enable ibm-os-rabbitmq --wait=450"
6. Because the previous command can take a variable amount of time to complete,
and may return a timeout error, run the following command until you see that
ibm-os-rabbitmq is started on all the HA controller nodes:
"pcs resource | grep -A1 ibm-os-rabbitmq-clone"
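Steps 3 and 6 both poll the cluster until the resource settles. A minimal polling loop of that shape is sketched below under stated assumptions: on a real HA controller the status command would be "pcs resource | grep -A1 ibm-os-rabbitmq-clone", while here 'check_status' is a stub so the sketch is self-contained and runnable anywhere.

```shell
# Stub standing in for: pcs resource | grep -A1 ibm-os-rabbitmq-clone
check_status() { echo "ibm-os-rabbitmq-clone: Stopped"; }

# Poll until the expected state string appears in the status output,
# or give up after a fixed number of retries.
wait_for_state() {
    expected="$1"
    retries=30
    while [ "$retries" -gt 0 ]; do
        if check_status | grep -q "$expected"; then
            echo "resource is $expected"
            return 0
        fi
        retries=$((retries - 1))
        sleep 1   # against a real cluster, a longer interval (e.g. 15s) is sensible
    done
    echo "timed out waiting for state: $expected" >&2
    return 1
}

wait_for_state "Stopped"   # prints: resource is Stopped
```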
Update the high availability (HA) configuration from Fix Pack 2
If you deployed high availability (HA) topologies using Fix Pack 2 and have not updated
them to Fix Pack 3 or later, then perform the actions in this section.
Otherwise, you should continue with the next section of this README file.
1. On any HA controller node, run this command:
"pcs resource update ibm-os-rabbitmq meta migration-threshold=1 failure-timeout=160"
Update the deployed topology
After making the changes described above, update your deployed topology to
apply the fixes contained in this fix pack.
If you did not deploy a topology prior to installing this fix pack, no
further action is required.
The IBM Cloud Manager with OpenStack Knowledge Center has more information
on updating a deployed topology.
List of fixes
4.3.0.1:
Mandatory GA fix pack
4.3.0.2:
- Update RabbitMQ server to fix problems with network partition detection and handling.
- Compute node services are unavailable after loss or shutdown of an HA controller node.
- Pacemaker repeatedly tries to restart DB2 HADR on a failing node and does not promote another node to become the primary.
- DB2 HADR became "split-brain" after updating hadr_target_list during promotion and HADR would no longer start for that database.
- Check that HADR is started for all databases before promoting a node to be DB2 HADR primary to prevent failures after promoting some databases to be HADR primary on another server.
- Cinder and heat services are down after moving the DB2 HADR primary.
- Self-service backend misses cloud updates after losing database connection.
- "knife os manage remove node" command allows HA controller node with DB2 to be removed.
- Adding HA controller node fails if another HA controller node is shutdown.
4.3.0.3:
- Improve performance of HA topology deployments and updates.
- Add support for deploying an HA topology with the IBM Platform Resources Scheduler enabled.
- Add support for deploying an HA topology with a public virtual IP address.
- Add support for defining custom HAProxy listeners when deploying or updating an HA topology.
- Improve DB2 HADR recovery in an HA cloud environment.
- HA documentation improvements.
- Improve deployment and management of an HA cloud environment from the IBM Cloud Manager with OpenStack - Deployer user interface.
- Fix RabbitMQ network partition problem during HA topology deployment.
- Add Central Logging Server feature to support ICM Service level logging and monitoring
- Add VMWare support in IBM Cloud Manager - Deployer graphical user interface.
- Add support for configuring the keystone identity backend with read-only LDAP.
- Add support for deploying with DVR enabled.
- Add support for deploying with FWaaS enabled.
- Add support for deploying topology with KVM for IBM z Systems as compute node.
4.3.0.4:
- Add support for deploying a multi-region HA topology.
- Add support for deploying an HA topology with external DB2.
- Add support for deploying ICM controller with Nuage plugin.
- Add documentation for keystone to keystone federation.
4.3.0.5:
- Add support for deploying HA topology with Contrail neutron plugin
- Add support for deploying HA topology with Nuage neutron plugin
- Add support for deploying HA topology with dedicated node for keystone and horizon
4.3.0.6:
- Add support for deploying a multi-site HA topology.
- Add documentation for ICM online backup/restore.
4.3.0.7:
- Add support for RHEL7.2 x86_64 controller node and x86_64 KVM compute node
- Add support for DB2 10.5.7
- Add support for VMWare vCenter Server 6.0.2
- Add support for PowerVC 1.3.1.2 and PowerVC 1.3.2
- Add support for updating topology with non-root user
- Update JRE to 7.0.9.60
- Update pacemaker to 1.1.13
- Add support for customizing the discovery service switches for vCenter resources (instance, image, and network)
- SE66462 Fix the problem that only 99 DVS port groups are retrieved
- SE66461 Fix the problem that the nova-compute service initialization fails when a VMware instance name contains double-byte characters.
- SE66580 Fix the problem that OpenStack does not report resources correctly for reassigned VMware instances
- SE66424 Fix an argument error in the call to timeutils.utcnow that raises an issue in the ceilometer central.log
- SE66898 Fix the MongoDB ReplicaSet connection loss error in the ceilometer api.log
- SE66337 Fix the libvirt driver being broken for non-disk-image backends
- SE66841 Verify the FIPS function and fix a FIPS configuration typo in the nova configuration file
- SE66116 Fix the error where the status of all resources in the cluster was displayed as "FAILED" or "Stopped"
- OpenStack Kilo ifixes for PSIRT for ICM "Open Source OpenStack Neutron, Horizon and Ironic Vulnerabilities" (CVE-2015-8914 CVE-2016-5363 CVE-2016-4428 CVE-2016-5362)
- OpenStack Kilo ifixes for PSIRT for ICM "Open Source OpenStack Glance Vulnerabilities" (CVE-2015-5162)
- IBM SmartCloud Entry OpenSSL updated for PSIRT for SCE/ICM "OpenSSL Security Advisory [22 Sep 2016] and [26 Sep 2016]" (CVE-2016-6302 CVE-2016-6305 CVE-2016-6303 CVE-2016-2182 CVE-2016-2180 CVE-2016-2177 CVE-2016-2178 CVE-2016-2179 CVE-2016-6306 CVE-2016-6307 CVE-2016-6308 CVE-2016-2181 CVE-2016-6309 CVE-2016-7052 CVE-2016-6304 CVE-2016-2183)
- IBM SmartCloud Entry OpenSSL updated for PSIRT for SCE/ICM "Open Source OpenSSL, GNUTls, RHEL CVE-2016-8610 'SSL-Death-Alert' " (CVE-2016-8610)
4.3.0.8:
- Add support for RHEL7.3 x86_64 controller node and x86_64 KVM compute node
- Add support for DB2 10.5.8
- Add support for VMWare vCenter Server 6.5
- Add support for PowerVC 1.3.3
- Add support for deploying with https on non-HA and HA topology
- Add support for updating from http to https on non-HA and HA topology
- Update JRE to 7.0.10.5
- Update rabbitmq to 3.6.5
- SE66988 Add support for VMware network folders in subfolders
- SE67124 Set destroy_after_evacuate to false
- SE67283 Fix local variable 'datastore_in_flavor_name' referenced before assignment
- SE67337 Add support for the datastore NFS41 type
- SE67317 Change the prompts when validating a node's BOOTPROTO configuration
- SE67396 Add Ironic-api service into the role "ibm-os-single-controller-distributed-database-node"
- SE66282 Add support to check the service status on the Central Logging Server
- SE67196 IBM Cloud Manager self service portal users cannot log in when authentication is LDAP
- OpenStack Kilo ifixes for PSIRT for ICM "Open Source OpenStack Heat Vulnerabilities" ( CVE-2016-9185 )
- OpenStack Kilo ifixes for PSIRT for ICM "Open Source RabbitMQ Vulnerabilities" (CVE-2015-8786 )
- IBM SmartCloud Entry JRE updated for PSIRT for SCE/ICM "IBM SDK, Java Technology Edition Quarterly CPU - Jan 2017 - Includes Oracle Jan 2017 CPU " (CVE-2016-2183 CVE-2017-3289 CVE-2017-3272 CVE-2017-3241 CVE-2017-3260 CVE-2016-5546 CVE-2017-3253 CVE-2016-5548 CVE-2016-5549 CVE-2017-3252 CVE-2016-5547 CVE-2016-5552 CVE-2017-3261 CVE-2017-3231 CVE-2017-3259 CVE-2016-2183 )
- IBM SmartCloud Entry JRE updated for PSIRT for SCE/ICM "IBM SDK, Java Technology Edition Quarterly CPU - Apr 2017 - Includes Oracle Apr 2017 CPU" (CVE-2017-3514 CVE-2017-3512 CVE-2017-3511 CVE-2017-3526 CVE-2017-3509 CVE-2017-3544 CVE-2017-3533 CVE-2017-3539 CVE-2017-1289 CVE-2016-9840 CVE-2016-9841 CVE-2016-9842 CVE-2016-9843 )
Contents of Fix/Service Pack build
version: 4.3.0.8
- IBM Cloud Manager with OpenStack: 4.3.0.8-20170615-0305
- OpenStack: openstack-kilo-proposed-rhel7.1-D20160310-0849
- OpenStack-fc19: openstack-kilo-proposed-fc19-D20160310-0853
- PowerVC Driver: 2015.1-2.1.ibm.201612070446
- Vmware Driver: 2015.1-201705160400
- Self-Service Portal: IBM-sce.430.FP008-20170612-0507
- PRS: D20160921-0500
- Deployer UI: 4.3.0.3-201508280014.ibm.el6.7
- Docker Container: docker_container-1.0.0.0-201505150838
- DR: dr-4.3-D20150805-1258
- Self-Service UI: Self-Service-UI.430.F20150813-1748
- Xiv: IBM_Storage_Driver_for_OpenStack_1.5.0-b641-D20150818-1239