===============================================================================
Readme file for: IBM Platform LSF & OpenStack resource connector
Product/Component Release: 9.1.3
Publication date: 30 December 2015
Last modified: 30 December 2015
Abstract: This patch enables LSF to launch instances from OpenStack.
===============================================================================

=========================
CONTENTS
=========================
1. Prerequisites
2. About IBM Platform LSF & OpenStack resource connector
3. Supported versions
4. Installation and configuration
5. Using the OpenStack resource connector
6. Logging and troubleshooting
7. Resource connector configuration reference
8. Notes
9. Copyright

=========================
1. Prerequisites
=========================
* Have root access to the LSF master host.
* Have privileges to update the DNS server.
* Be able to restart the LSF cluster.
* OpenStack Liberty is already installed. Refer to
  http://docs.openstack.org/liberty/
* Be familiar with and have the ability to perform OpenStack administrative
  operations.
* The virtual network to be used by OpenStack virtual instances must be
  configured so that the instances can communicate with LSF hosts.

=========================
2. About IBM Platform LSF & OpenStack resource connector
=========================
This feature enables LSF clusters to launch instances from OpenStack to
satisfy pending workload. The instances join the LSF cluster. When instances
become idle, the LSF resource connector terminates them.

=========================
3. Supported versions
=========================
The integration package has been tested and works with the following:
- Linux2.6-glibc2.3-x86_64
- LSF 9.1.3 Standard Edition
- OpenStack Liberty (tested on CentOS 7)

=========================
4. Installation and configuration
=========================
-------------------------
4.1. OpenStack services
-------------------------
The following major OpenStack services are required for IBM Platform LSF &
OpenStack resource connector:
- Identity service (Keystone)
- Image service (Glance)
- Compute service (Nova)
- Networking service (Neutron)

-------------------------
4.2. Configure OpenStack
-------------------------
For the steps in this section, perform all operations as the OpenStack
administrator unless otherwise stated.

4.2.1. Create projects, users, and roles for LSF
-------------------------
Create a non-administrator project, user, and role for LSF. The LSF user
does not require OpenStack administrator privileges, but it must have access
to resources such as images and networks created by the administrator.

4.2.2. Modify security groups for LSF
-------------------------
Add rules that open all LSF listening ports to the security groups that are
used to launch instances. The ports must match those of the existing LSF
cluster. The default port numbers are as follows:
* LSF_LIM_PORT=7869 (TCP and UDP)
* LSF_RES_PORT=6878 (TCP)
* LSB_SBD_PORT=6882 (TCP)

4.2.3. Build the LSF cloud image
-------------------------
The LSF cloud image is a single file that contains a virtual disk image with
the following:
* Bootable Linux2.6-glibc2.3-x86_64
* LSF 9.1.3 Standard Edition installed
* cloud-init installed

1) Create the virtual machine image.
   Refer to the following for more details on downloading an official image
   (for example, CentOS-6-x86_64-GenericCloud.qcow2):
   http://docs.openstack.org/image-guide/
   Upload the image file to the Image service (Glance).
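   For example, the image file can be uploaded with the glance client (a
   sketch; the image name and file name are illustrative):

   $ glance image-create --name "CentOS-6-x86_64-GenericCloud" \
         --disk-format qcow2 --container-format bare \
         --file CentOS-6-x86_64-GenericCloud.qcow2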
2) Launch an instance with the image.
   LSF installation requires root access in the OpenStack instances.
   However, the official images do not have root enabled. Therefore, you
   must enable root when launching the OpenStack instances. The following is
   an example of a userdata script for the CentOS image; the sudoers rule it
   writes grants the default "centos" cloud user passwordless sudo (adjust
   the user name to match your image). Launch an instance with this script
   to enable root.

#!/bin/bash
if [ ! -f /etc/sudoers.d/90-cloud-init-users ]; then
    # Create the sudoers rule for the default cloud user if it is missing.
    cat <<EOF > /etc/sudoers.d/90-cloud-init-users
centos ALL=(ALL) NOPASSWD:ALL
EOF
    chmod 440 /etc/sudoers.d/90-cloud-init-users
else
    echo "/etc/sudoers.d/90-cloud-init-users exists" > /root/cloud-init-user.log
    cat /etc/sudoers.d/90-cloud-init-users >> /root/cloud-init-user.log
fi

3) Configure the instance.
   a) Log in to the instance.
      Log in to the instance as the default cloud user and switch to root
      with "sudo su -".
   b) Set up firewall communications.
      Modify the instance's firewall to open all LSF listening ports. The
      ports must match those of the existing LSF cluster. The default port
      numbers are as follows:
      * LSF_LIM_PORT=7869 (TCP and UDP)
      * LSF_RES_PORT=6878 (TCP)
      * LSB_SBD_PORT=6882 (TCP)
4) Install LSF on the instance.
   a) Log in to the instance.
      Log in to the instance as the default cloud user and switch to root
      with "sudo su -".
   b) Install LSF with install.config parameters.
      Install LSF as a dynamic host and make sure that the LSF installation
      directory is the same as on the LSF master host. For more details on
      installing LSF, refer to the following:
      http://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.3/lsf_unix_install/lsf_installnewunix_deploycluster_tsk.dita?lang=en
      The following install.config parameters are required to enable dynamic
      hosts:
      * ENABLE_DYNAMIC_HOSTS
      * LSF_DYNAMIC_HOST_WAIT_TIME
      To ensure the smooth operation of the integration, set
      LSF_DYNAMIC_HOST_WAIT_TIME to a small value such as 1 or 2 seconds.
      For example, specify the following parameters in the install.config
      file:
      LSF_TOP="/usr/share/lsf"
      LSF_ADMINS="lsfadmin root"
      LSF_CLUSTER_NAME="cluster1"
      LSF_MASTER_LIST="hostm hostd"
      LSF_ENTITLEMENT_FILE="/root/platform_lsf_std_entitlement.dat"
      LSF_TARDIR="/root/"
      ENABLE_DYNAMIC_HOSTS="Y"
      LSF_DYNAMIC_HOST_WAIT_TIME="2"
   c) Update the LSF configuration parameters.
      After installation, update the LSF_LIM_PORT parameter in the
      <LSF_TOP>/conf/lsf.conf file. The port number must be the same as the
      one defined on the LSF master host. For example,
      LSF_LIM_PORT=7869
      Update the LSF configuration to synchronize the cluster configuration
      with the master host. Define the LSF_GET_CONF parameter in the
      <LSF_TOP>/conf/lsf.conf file:
      LSF_GET_CONF=lim
   d) Configure the instance to provide a new Boolean resource.
      The new resource name is used by LSF to identify OpenStack instances.
      Instances used by LSF are omitted from bhosts output unless the '-a'
      option is specified.
      To provide a new resource, set the LSF_LOCAL_RESOURCES parameter in
      the <LSF_TOP>/conf/lsf.conf file. For example, to provide a new
      resource "openstackhost" on each host launched from this image:
      LSF_LOCAL_RESOURCES="[resource openstackhost]"
      The following examples use "openstackhost" to refer to this newly
      created resource, but you can specify any name. For more details,
      refer to the following:
      https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.3/lsf_admin/resources_add_new.dita?lang=en
   e) Log out from the instance.
5) Snapshot the instance.
   Create a snapshot of the instance and ensure that this newly created
   image is accessible to the LSF project.
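   For example, the snapshot can be created with the nova client (a sketch;
   the instance name "lsf-image-builder" is illustrative):

   $ nova image-create lsf-image-builder "CentOS-6.x-LSF-x86_64"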
-------------------------
4.3. Install IBM Platform LSF & the OpenStack resource connector
-------------------------
If you do not have an existing LSF cluster, follow the appropriate Platform
LSF installation guide to install LSF. Only LSF Standard Edition is
supported.

Steps to install the resource connector:
1) Log in to the LSF master host as root.
2) Install Python version 2.6.6 or later.
3) Source the LSF environment.
   - For csh or tcsh:
     % source LSF_TOP/conf/cshrc.lsf
   - For sh, ksh, or bash:
     $ . LSF_TOP/conf/profile.lsf
4) Install the resource connector.
   a) Copy the patch file to the LSF master host.
   b) Check the installation:
      $LSF_ENVDIR/../<LSF_version>/install/patchinstall -c <patch_file>
   c) Run patchinstall to install the patch:
      $LSF_ENVDIR/../<LSF_version>/install/patchinstall <patch_file>
   Note: To roll back this patch installation:
   a) Run $LSF_ENVDIR/../<LSF_version>/install/patchinstall -r
   b) Revert any changes to
      $LSF_ENVDIR/lsbatch/<cluster_name>/configdir/lsb.modules
   c) Run badmin mbdrestart to apply the changes.

-------------------------
4.4. Configure the resource connector
-------------------------
Modify the resource connector configuration files after installation.
1) Update the OpenStack administrative configuration.
   Change the following parameters in the
   <LSF_TOP>/<LSF_version>/resource_connector/openstack/conf/osprov_config.json
   file to enable the resource connector to connect to OpenStack:
   * LSFMasterIP
   * OS_USERNAME
   * OS_PASSWORD
   * OS_AUTH_URL
   * OS_PROJECT_NAME
   * OS_NETWORK_NAME
   The OS_USERNAME, OS_PASSWORD, and OS_PROJECT_NAME values are the LSF user
   credentials created in Section 4.2.1.
   For more details on these parameters, refer to Section 7 in this README.
2) Create templates.
   Create at least one template in the
   <LSF_TOP>/<LSF_version>/resource_connector/openstack/conf/osprov_templates.json
   file. For more details, refer to the osprov_templates.json example in
   Section 7 of this README. Ensure that your template accurately defines at
   least the following attributes:
   * ncpus
   * openstackhost
   The following is a minimal example for hosts with 4 CPUs:
   {
     "Templates": [
       {
         "Name": "TemplateA",
         "Attributes": {
           "ncpus": ["Numeric", "4"],
           "openstackhost": ["Boolean", "1"]
         },
         "Image": "CentOS-6.x-LSF-x86_64",
         "Flavor": "m1.large",
         "MaxNumber": "10"
       }
     ]
   }

-------------------------
4.5. Update the LSF configuration
-------------------------
1) Log in to the LSF master host as root.
2) Update the DNS settings.
   When the resource connector launches an instance with IP address
   aa.bb.cc.dd, the instance is assigned a host name of the form
   <InstancePrefix>-aa-bb-cc-dd, where <InstancePrefix> is the
   InstancePrefix parameter in the
   <LSF_TOP>/<LSF_version>/resource_connector/openstack/conf/osprov_config.json
   file. The default value is "host". For example, the host name is
   "host-192-168-100-1" for an instance with IP address 192.168.100.1.
   To generate host names for instances, run the following command:
   <LSF_TOP>/<LSF_version>/resource_connector/openstack/scripts/generatehosts
   --subnet <subnet_CIDR> --domain <domain_name>
   For example,
   $ scripts/generatehosts --subnet 10.110.135.192/26 --domain openstack.domain
   10.110.135.193 host-10-110-135-193.openstack.domain host-10-110-135-193
   10.110.135.194 host-10-110-135-194.openstack.domain host-10-110-135-194
   10.110.135.195 host-10-110-135-195.openstack.domain host-10-110-135-195
   .....
   Review the generated host names, then manually append the host names and
   IP addresses to the /etc/hosts file on the LSF master host (one way to do
   this is shown in the sketch below).
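   For example (a sketch; the subnet and domain are illustrative), save the
   output to a temporary file, review it, and then append it:

   $ scripts/generatehosts --subnet 10.110.135.192/26 \
         --domain openstack.domain > /tmp/osprov_hosts
   $ # review /tmp/osprov_hosts, then:
   $ cat /tmp/osprov_hosts >> /etc/hosts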
3) Configure the demand calculation scheduler module.
   Modify the $LSF_ENVDIR/lsbatch/<cluster_name>/configdir/lsb.modules file
   to add the new module "schmod_demand" to the PluginModule section. For
   example,
   Begin PluginModule
   SCH_PLUGIN        RB_PLUGIN    SCH_DISABLE_PHASES
   .
   .
   .
   schmod_demand     ()           ()
   End PluginModule
4) Define resources to identify instances.
   Section 4.2.3, Step 4d describes how to define a new "openstackhost"
   resource. You must add this new resource to the LSF resource list as a
   new Boolean resource.
   Section 4.4, Step 2 describes how to create templates. Add the "template"
   String resource, which is used by jobs to request particular templates.
   On the LSF master host, modify the <LSF_TOP>/conf/lsf.shared file to add
   the new resources to the Resource section as follows:
   Begin Resource
   RESOURCENAME    TYPE      INTERVAL   INCREASING   DESCRIPTION
   .
   .
   .
   openstackhost   Boolean   ()         ()           (instances from OpenStack)
   template        String    ()         ()           (Template name)
   End Resource
5) Enable the resource connector and restart the LSF cluster.
   Define the LSF_EXTERNAL_HOST_FLAG parameter in the <LSF_TOP>/conf/lsf.conf
   file to enable the resource connector feature. You must set the value of
   this parameter to the resource name that you created in Step 4. For
   example,
   LSF_EXTERNAL_HOST_FLAG=openstackhost
6) Restart the LSF daemons on the master host for the changes to take
   effect:
   lsadmin limrestart
   lsadmin resrestart
   badmin mbdrestart

=========================
5. Using the OpenStack resource connector
=========================
-------------------------
5.1. Check resource connector status
-------------------------
On the LSF master host, verify that the 'ebrokerd' process is running after
enabling the resource connector.
# ps -ef | grep ebrokerd

-------------------------
5.2. Submit jobs to launch instances from OpenStack
-------------------------
In this section, "host-10-110-135-193" is a sample instance from OpenStack.
1) Use bsub to submit jobs that require instances launched from OpenStack.
   The following bsub command with no options submits a job that triggers a
   launch demand when there are no available resources in the LSF cluster:
   # bsub myjob
   Alternatively, you can use the "openstackhost" resource in the job's
   select[] requirement. Because the "openstackhost" resource is defined in
   a template as a Boolean attribute, it triggers a launch demand:
   # bsub -R "select[openstackhost]" myjob
   You can also use the template name as a resource in the job's select[]
   requirement. This triggers a launch demand from the specified template:
   # bsub -R "select[template == TemplateA]" myjob
2) Use bhosts to monitor instances.
   The status of an instance becomes "ok" when it joins the LSF cluster as a
   dynamic host.
   # bhosts -a
   HOST_NAME            STATUS   JL/U MAX NJOBS RUN SSUSP USUSP RSV
   lsfmaster            ok       -    1   0     0   0     0     0
   host-10-110-135-193  ok       -    1   1     1   0     0     0
   Verify that the job runs on host-10-110-135-193.
3) Use bhosts to monitor the status of the instances.
   Run bhosts with the "-a" option, which shows all hosts, including
   terminated instances.
   If an instance from OpenStack has no jobs running on it for
   LSF_EXTERNAL_HOST_IDLE_TIME minutes, it is relinquished: its host status
   changes to "closed", then to "unavail" when the instance is terminated.
   # bhosts -a
   HOST_NAME            STATUS   JL/U MAX NJOBS RUN SSUSP USUSP RSV
   lsfmaster            ok       -    1   0     0   0     0     0
   host-10-110-135-193  unavail  -    1   0     0   0     0     0
   If an instance has been in the cluster for more than
   LSF_EXTERNAL_HOST_MAX_TTL minutes, it is closed, and any jobs running on
   the instance are allowed to run to completion. Once the instance is idle,
   it is terminated and its status becomes "unavail".
   # bhosts -a
   HOST_NAME            STATUS   JL/U MAX NJOBS RUN SSUSP USUSP RSV
   lsfmaster            ok       -    1   0     0   0     0     0
   host-10-110-135-193  closed   -    1   1     1   0     0     0
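   Both time-based policies are controlled in <LSF_TOP>/conf/lsf.conf. For
   example, a minimal sketch (the values are illustrative; see Section 7 for
   details on both parameters):

   LSF_EXTERNAL_HOST_IDLE_TIME=10
   LSF_EXTERNAL_HOST_MAX_TTL=30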
4) External job submission and execution controls.
   To control job submissions, such as performing permission checks before
   launching instances from OpenStack, you can set up an external submission
   (esub) script. For more details, refer to the following:
   https://www-01.ibm.com/support/knowledgecenter/SSETD4_9.1.3/lsf_admin/chap_sub_exec_controls_lsf_admin.html?lang=en

=========================
6. Logging and troubleshooting
=========================
-------------------------
6.1 Log files for LSF
-------------------------
To change the log level or log classes for LSF, update the following
parameters in the <LSF_TOP>/conf/lsf.conf file:
* LSF_LOG_MASK
* LSB_DEBUG_MBD
* LSB_DEBUG_EBROKERD
For example, the following sets the log level to LOG_INFO and the debugging
log class for mbatchd and ebrokerd to LC2_COMM:
LSF_LOG_MASK=LOG_INFO
LSB_DEBUG_MBD="LC2_COMM"
LSB_DEBUG_EBROKERD="LC2_COMM"

-------------------------
6.2 Log files for the resource connector
-------------------------
To change the log level for the resource connector, update the LogLevel
parameter in the
<LSF_TOP>/<LSF_version>/resource_connector/openstack/conf/osprov_config.json
file. For example, to set the log level to INFO:
{
    "LogLevel": "INFO",
    ...
}
Log files for the resource connector are in the
<LSF_TOP>/<LSF_version>/resource_connector/openstack/log directory.

-------------------------
6.3 Persistent files for the resource connector
-------------------------
The resource connector saves some state information for synchronization with
LSF after failover or restart. This information is saved in persistence
files in the $LSB_SHAREDIR/<cluster_name>/resource_connector/ directory.
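For example, a quick sketch for inspecting recent resource connector
activity and its saved state (the exact log file names depend on your
installation):

# tail -n 100 <LSF_TOP>/<LSF_version>/resource_connector/openstack/log/*.log
# ls -l $LSB_SHAREDIR/<cluster_name>/resource_connector/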
=========================
7. Resource connector configuration reference
=========================
-------------------------
osprov_config.json
-------------------------
The osprov_config.json file contains administrative settings for the
resource connector. It is used to invoke remote OpenStack services, such as
creating virtual instances. The osprov_config.json file is located in the
<LSF_TOP>/<LSF_version>/resource_connector/openstack/conf/ directory.
The parameters for this file are as follows:

LogLevel
--------
Required. The log level for this host provider. Valid log levels are: INFO,
DEBUG, WARN.

LSFMasterIP
-----------
Required. The IP address of the LSF master host.

InstancePrefix
--------------
Required. Prefix for the OpenStack instance host name. Default value is
"host".

OS_USERNAME
-----------
Required. The LSF user name in OpenStack.

OS_PASSWORD
-----------
Required. The LSF user password in OpenStack, used by the resource connector
to authenticate with OpenStack. Configure the password as plain text.

OS_AUTH_URL
-----------
Required. The OpenStack authentication URL. The resource connector uses this
URL to authenticate with the OpenStack identity server.

OS_USER_DOMAIN_ID
-----------------
Optional. Domain ID, introduced in Identity API version 3. Uses "default" if
not specified.

OS_PROJECT_NAME
---------------
Required. The LSF project name in OpenStack with which instances are
created.

OS_KEYPAIR
----------
Optional. The OpenStack key pair name used to log in to instances via ssh.
If not specified, you cannot log in to the instances.

OS_SECURITYGROUPS
-----------------
Optional. A list of strings specifying OpenStack security groups that are
applied to instances. If not specified, OpenStack uses the "default" group.

OS_NETWORK_NAME
---------------
Required. The OpenStack network name that is attached to instances. Use the
network through which the instances can communicate with the LSF cluster.

The following is an example configuration of the osprov_config.json file:
{
    "LogLevel": "INFO",
    "LSFMasterIP": "119.81.183.212",
    "InstancePrefix": "host",
    "OS_USERNAME": "LSF",
    "OS_PASSWORD": "Letmein123",
    "OS_AUTH_URL": "http://119.81.183.245:5000/v3",
    "OS_USER_DOMAIN_ID": "default",
    "OS_PROJECT_NAME": "LSF",
    "OS_KEYPAIR": "LSFKey",
    "OS_SECURITYGROUPS": [ { "name": "default" } ],
    "OS_NETWORK_NAME": "public"
}

-------------------------
osprov_templates.json
-------------------------
The osprov_templates.json file is the primary way to define the mapping
between LSF resource demand requests and OpenStack instances. A template
represents a set of hosts that share common attributes, such as the number
of CPUs, the amount of available memory, the installed software stack, and
the operating system. LSF requests resources from the resource connector by
specifying the number of instances of a particular template that it requires
to satisfy its demand. The resource connector uses the definitions in this
file to map this demand into a set of allocation requests in OpenStack.
The osprov_templates.json file is located in the
<LSF_TOP>/<LSF_version>/resource_connector/openstack/conf/ directory.
The file contains a JSON list called "Templates". Each element of the list
is an object containing the following parameters:

Name
----
Required. The unique template name, for example, TemplateA.

Attributes
----------
Required. A list of attributes representing the hosts in the template from
the LSF point of view. LSF attempts to place its pending workload on
hypothetical hosts matching these attributes in order to calculate how many
instances of each template to request.
The format of each attribute string in the list is
"<attribute_name>": ["<attribute_type>", "<attribute_value>"]
<attribute_name> is the LSF resource name (for example, "type" or "ncores").
The attribute name must either be a built-in resource (such as r15s or type)
or be defined in the Resource section of the <LSF_TOP>/conf/lsf.shared file
on the LSF master host.
<attribute_type> can be 'Boolean', 'String', or 'Numeric' and must match the
resource definition in lsf.shared.
<attribute_value> is the value of the resource provided by the hosts. For
Boolean resources, use '1' to define the presence of the resource and '0' to
define its absence. For Numeric resources, specify a range using
'[min:max]'.
Notes:
* "type" attribute: Default value is LSB_RSRC_CONNECTOR_DEFAULT_HOST_TYPE
  (see lsf.conf).
* "ncpus" attribute: Default value is 1.

Image
-----
Required. The image name used to launch virtual instances.

Flavor
------
Required. The flavor name used to launch virtual instances.

MaxNumber
---------
Required. The maximum number of instances to provide. Set MaxNumber to an
appropriate value according to the instance quota of the LSF project.

UserScript
----------
Optional. A user-written script that is sent to the instance and executed
when the instance starts up. You can create this script for various
operations, such as mounting volumes or installing packages.

UserData
--------
Optional. A string representing a list of keys and their values.
The format of UserData is
<key>=<value_list>;<key>=<value_list>
<key> is the key name of a UserData entry, such as "volumes" or "packages".
The key is converted to upper case by the resource connector.
<value_list> is a list of UserData values separated by commas, for example,
"volume1,volume2" or "package1,package2".
Once UserData is defined, it is divided into keys and values and exported to
the instance's environment variables. For example, if UserData is defined
as:
volumes=X,Y;mount_points=/share1,/share2;packages=M,N
it is exported as the following environment variables in the instances:
VOLUMES=X,Y
MOUNT_POINTS=/share1,/share2
PACKAGES=M,N
These variables can be read by the UserScript in the instance as the keys
"VOLUMES", "MOUNT_POINTS", and "PACKAGES".
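For example, a minimal UserScript fragment that consumes the PACKAGES
variable (a sketch; it assumes a yum-based image such as CentOS):

#!/bin/bash
# Install each package listed in the comma-separated PACKAGES variable.
for pkg in ${PACKAGES//,/ }; do
    yum install -y "$pkg" >> /root/userscript.log 2>&1
done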
Notes:
* "volumes" key:
  - volumes is a list of volume names that are attached to the instance. If
    a template has volumes defined in UserData, MaxNumber for that template
    must be set to 1, because a volume can only be attached to one instance
    at a time.
  - Each volume can only be defined in one template.
  - mount_points must also be defined to specify the target mount
    directories for the volumes.

The following is an example osprov_templates.json file:
{
  "Templates": [
    {
      "Name": "TemplateA",
      "Attributes": {
        "type": ["String", "X86_64"],
        "ncpus": ["Numeric", "4"],
        "mem": ["Numeric", "480"],
        "maxmem": ["Numeric", "512"],
        "openstackhost": ["Boolean", "1"]
      },
      "Image": "CentOS-6.x-LSF-x86_64",
      "Flavor": "m1.small",
      "MaxNumber": "1",
      "UserData": "volumes=volume1,volume2;mount_points=/share1,/share2;packages=package1,package2",
      "UserScript": "scripts/userscript.sh"
    }
  ]
}
The example defines a template named 'TemplateA'. LSF attempts to place any
pending workload on hypothetical hosts of type X86_64 with ncpus=4, mem=480,
and maxmem=512. If it successfully places some of its pending workload on N
such hosts, it requests N instances of TemplateA from the resource
connector. The connector logic, in turn, attempts to allocate N hosts with
the configured image and flavor in OpenStack. If it succeeds in obtaining
any instances (even if there are fewer than requested), the resource
connector informs LSF that it may use them.
In this example, the template also defines the 'openstackhost' resource.
Therefore, users can ensure that their jobs generate demand for OpenStack
resources by using 'select[openstackhost]' in their LSF job submissions'
resource requirement strings.
After creating the instance, OpenStack attaches the "volume1" and "volume2"
volumes and installs the "package1" and "package2" packages on the instance.
The volume, mount point, and package lists are read from the environment
variables "VOLUMES", "MOUNT_POINTS", and "PACKAGES".
"scripts/userscript.sh" is shipped to the instance and executed during
instance startup. The following script is an example of
"scripts/userscript.sh" in the CentOS 6 image. It reads the environment
variables and mounts each volume to the specified mount point.

#!/bin/bash
# Walk the comma-separated VOLUMES and MOUNT_POINTS variables in lockstep
# and mount each volume to its corresponding mount point.
LOG_FILE=/root/userscript.log
VOLUMES="$VOLUMES,"
MOUNT_POINTS="$MOUNT_POINTS,"
while [ -n "$VOLUMES" ]; do
    volume=${VOLUMES%%,*}
    VOLUMES=${VOLUMES#*,}
    mnt=${MOUNT_POINTS%%,*}
    MOUNT_POINTS=${MOUNT_POINTS#*,}
    if [ -z "$mnt" ]; then
        echo "No mount point for volume $volume" >> $LOG_FILE
    else
        if [ -e "$mnt" ] || mkdir -p "$mnt" >> $LOG_FILE 2>&1; then
            echo "Mount volume $volume to $mnt ..." >> $LOG_FILE
            mount "/opt/osprovider/$volume" "$mnt" >> $LOG_FILE 2>&1 ||
                echo "Failed to mount volume $volume to $mnt" >> $LOG_FILE
        else
            echo "Failed to create $mnt" >> $LOG_FILE
        fi
    fi
done

IMPORTANT: When defining templates, you must ensure that the attribute
definitions presented to LSF accurately match those provided by OpenStack.
If, for example, the attribute definition specifies hosts with ncpus=4, but
the actual hosts returned by OpenStack report ncpus=2, LSF's demand
calculation will not be accurate.
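For example, one way to cross-check a template against OpenStack (a sketch;
the flavor name is illustrative) is to compare the flavor definition with
the template's ncpus and mem attributes:

$ nova flavor-show m1.small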
-------------------------
lsf.conf
-------------------------
Add the following parameters and their values to <LSF_TOP>/conf/lsf.conf for
the resource connector.

LSF_EXTERNAL_HOST_FLAG
-------------------------
Syntax
LSF_EXTERNAL_HOST_FLAG = String ...
Description
Setting this parameter enables the overall resource connector feature.
Specify a list of string resource names that identify OpenStack instances.
Any instance providing a resource from the list is initially closed by LSF
at startup, and only opened when the resource connector informs LSF that the
instance has been allocated.
Run badmin mbdrestart to make changes take effect.
Example
LSF_EXTERNAL_HOST_FLAG=openstackhost
Default
Not defined

LSB_RSRC_CONNECTOR_DEFAULT_HOST_TYPE
-------------------------
Syntax
LSB_RSRC_CONNECTOR_DEFAULT_HOST_TYPE = string
Description
Specifies the default host type to use for a template if the 'type'
attribute is not defined on a template in the osprov_templates.json file.
Default
X86_64

LSF_HF_LOOP_INTERVAL
-------------------------
Syntax
LSF_HF_LOOP_INTERVAL = seconds
Description
The interval, in seconds, at which the resource connector checks host status
and asynchronous request results from OpenStack.
Run badmin mbdrestart to make changes take effect.
Default
30 seconds

LSF_EXTERNAL_HOST_MAX_TTL
-------------------------
Syntax
LSF_EXTERNAL_HOST_MAX_TTL = minutes
Description
Maximum time-to-live for an OpenStack instance. If an instance is in the
cluster for this number of minutes, LSF closes it (its status goes to
closed_HF). After that, if the instance ever goes idle, LSF terminates it.
The default value is 0, which means "infinite" (that is, the instance is
never closed or relinquished due to this time-based policy).
Example
LSF_EXTERNAL_HOST_MAX_TTL=30
Default
0 minutes, which means "infinite"

LSF_EXTERNAL_HOST_IDLE_TIME
-------------------------
Syntax
LSF_EXTERNAL_HOST_IDLE_TIME = minutes
Description
If an OpenStack instance has no jobs running on it for this number of
minutes, LSF terminates the instance.
Example
LSF_EXTERNAL_HOST_IDLE_TIME=60
Default
60 minutes

=========================
8. Notes
=========================
1) Do not create advance reservations on instances, because the instances
   may be terminated after idle time. If advance reservations are created on
   instances, the reservations remain active even if the instances are
   destroyed. However, jobs cannot run on a terminated instance because its
   LSF daemons are shut down, so those jobs become unavailable.
2) Currently, the host name resolution described above only supports
   master-compute resolution. It does not support compute-compute
   resolution, which is required for parallel jobs.

=========================
9. Copyright
=========================
Copyright IBM Corporation 2015

U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.

IBM, the IBM logo and ibm.com are trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product
and service names might be trademarks of IBM or other companies. A current
list of IBM trademarks is available on the Web at "Copyright and trademark
information" at www.ibm.com/legal/copytrade.shtml.