IBM Platform LSF 9.1.3 Fix Pack 5 (388731) Readme File

Abstract

LSF Version 9.1.3 Fix Pack 5. This Fix Pack includes all fixed issues and solutions included in previous LSF Version 9.1.3 Fix Packs and addresses new issues fixed between 21 October 2015 and 3 February 2016. For detailed descriptions of the issues and solutions in this Fix Pack, refer to the LSF 9.1.3 Fix Pack 5 Fixed Bugs List (lsf9.1.3.5_fixed_bugs.pdf, which can be downloaded from Fix Central using fix ID lsf-9.1.3.5-spk-2016-Feb-build388731).

Description

Readme documentation for IBM Platform LSF 9.1.3 Fix Pack 5 (388731), including installation instructions, prerequisites and co-requisites, and a list of fixes.

The new issues addressed in LSF Version 9.1.3 Fix Pack 5:

ID

Fixed Date

Description

P101586

2016/01/29

LSF scheduler performance is slow when a job that is submitted with the -m option requires hundreds of hosts, and hundreds of hosts are configured for HOSTS in the queue.

P101582

2016/01/29

Jobs that are submitted from a floating client are rejected when the floating client is expired or the master LIM restarts, and the job submission command displays the following error message:
"Request from non-LSF host rejected"

P101578

2016/01/27

Jobs with an XOR in the resource selection string cannot be resized.

P101577

2016/02/02

LSF does not display esub error messages when using the ls_rexecve() or ls_rexecv() APIs to run jobs.

P101575

2016/01/18

For script jobs that catch the SIGINT and SIGTERM signals and normally exit by signal, if the job is killed using the bkill command, bjobs and bhist incorrectly display the job exit information as "Exited by signal 14".
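For reference, an affected script job might look like the following sketch, which traps the signals, cleans up (the lock file is a placeholder), and re-raises the signal so that the job exits by signal:

#!/bin/sh
# Trap SIGTERM/SIGINT, clean up, then re-raise the signal so that the
# script exits by signal rather than by a normal exit code.
trap 'rm -f /tmp/myjob.lock; trap - TERM; kill -TERM $$' TERM
trap 'rm -f /tmp/myjob.lock; trap - INT; kill -INT $$' INT
sleep 3600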

P101570

2016/01/29

When creating an advance reservation with several hosts for a long period of time, mbatchd memory usage increases significantly. However, if this advance reservation expires or is deleted, the unused memory is not released.

P101566

2016/01/29

If certain parameter values in lsb.resources are longer than 4096 characters, mbatchd crashes when running badmin reconfig.
The following parameters are affected by this issue:
- QUEUES
- PER_QUEUE
- HOSTS
- USERS
- PROJECTS
- PER_PROJECT
- LIC_PROJECTS
- PER_LIC_PROJECT

P101557

2016/01/13

In MultiCluster lease mode, mbatchd dispatches jobs very slowly and logs XDR errors in the mbatchd log.

P101548

2016/01/21

When ABS_RUNLIMIT=y is defined in lsb.params, the bqueues -l command incorrectly shows the run limit normalized to the host.

P101533

2016/01/14

If sbatchd is restarted, any jobs that were running on the machine before the restart are no longer killed when the jobs' wall clock times expire.

P101530

2016/01/05

When the execution cluster defines the MC_PLUGIN_UPDATE_INTERVAL parameter in lsb.params and defines several receive queues in lsb.queues, each scheduling session takes too much time filtering hosts if the average number of hosts in the HOSTS lists of the receive queues defined in lsb.queues is large.

P101512

2016/01/25

Memory leaks occur when job messages are queried through the lsb_readjobinfo_cond API and the bjobs command.

P101511

2015/12/14

If cgroup is disabled and a large number of jobs are submitted, sbatchd calls to PIM might take over 10 seconds. Each call then times out and is retried three times. These repeated attempts cause sbatchd to hang and block communication between mbatchd and sbatchd.

P101503

2015/12/14

When a job's pre-execution script fails repeatedly, the run times shown in the stream file and in the bhist output differ.

P101501

2015/12/01

Modifying the LSB_SUB_MODIFY_FILE environment variable causes esub to stop working because LSB_SUB_MODIFY_FILE is reserved by LSF and users should not change it.
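For reference, an esub script appends option changes to the file whose path LSF supplies in LSB_SUB_MODIFY_FILE; it must not reassign the variable itself. A minimal sketch (the queue name "normal" is a placeholder):

#!/bin/sh
# Minimal esub sketch: append submission option changes to the file that
# LSF names in LSB_SUB_MODIFY_FILE; do not reassign the variable itself.
echo 'LSB_SUB_QUEUE="normal"' >> "$LSB_SUB_MODIFY_FILE"
exit 0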

P101493

2015/12/03

Jobs without GPU requirements have access to GPU resources.
This fix prevents jobs without GPU requirements from accessing GPU resources. If at least one of the LSF GPU resources (ngpus_shared, ngpus_excl_p, and ngpus_excl_t) is configured, LSF configures the following for jobs without GPU requirements:
1. Sets CUDA_VISIBLE_DEVICES="" for the job.
2. If the GPU enforcement feature is enabled, LSF creates a devices cgroup for the job and sets the GPU denied list for this cgroup. The GPU denied list includes all GPU resources on the host.
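For illustration, a job that needs GPU access must request one of these resources explicitly, for example (a sketch that assumes ngpus_shared is configured in the cluster; gpu_app is a placeholder):

bsub -R "rusage[ngpus_shared=1]" ./gpu_app
# A job submitted without any GPU resource requirement now runs with
# CUDA_VISIBLE_DEVICES="" and cannot access the GPUs on the host.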

P101491

2015/11/17

When the IP address of an LSF floating client changes, the "bsub -K" command hangs on the floating client while waiting for a response from mbatchd. However, mbatchd cannot connect to the floating client because mbatchd saved the wrong IP address of the floating client and there are no error messages for this issue in the mbatchd log.

P101489

2015/11/24

When lim -t/-T is run, lim logs the following cluster manager information even if the log level is LSF_WARNING:
"The cluster manager is the invoker in debug mode".

P101478

2015/11/16

sbatchd incorrectly logs the following message for each job regardless of whether the job level temporary directory can be accessed:
"createJobTmpDir: Job level tmp directory is set to /tmp/.tmpdir"

P101445

2015/11/12

Every time the lsb_getjobdepinfo API is called, it leaves behind a socket file descriptor in CLOSE_WAIT state. As a result, the LSF services become unresponsive after lsb_getjobdepinfo is frequently called.

P101443

2015/11/19

If the parent job is killed while in a pending or suspended state after brequeue is run, a child job that depends on it through the "ended()" condition cannot run.
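For reference, such a dependency is typically created with a submission like the following (the job ID 123 and script name are placeholders):

bsub -w "ended(123)" ./child_job.sh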

P101439

2015/11/11

After LSF_DAEMON_WRAP is enabled, the root sbatchd hangs if esub writes more data to the socket than the limit defined in /proc/sys/net/core/wmem_max or /proc/sys/net/core/wmem_default on the execution host.

P101431

2015/10/23

When two tasks are executing on a host and one finishes, the other task exits after a period of time. The root cause is that when one of the PIDs is gone, res stops sending rusage information to pam. This fix enables res to continue sending rusage information even when one task is finished.

P101428

2015/10/24

When mbatchd automatically deletes the oldest lsb.stream.utc or lsb.status.utc file, mbatchd might core dump.
When the size of the lsb.status file reaches MAX_EVENT_STREAM_SIZE as defined in lsb.params, mbatchd incorrectly logs the error message "mergeStatusFile: checkStatusFile(pathTo/stream/lsb.status) failed: No such file or directory".

P101412

2015/11/05

Enhance the example that shows how to use the lsb_resreq_getqueextsched() API.

P101397

2016/02/02

When a named advance reservation is created, then all jobs that refer to the reservation finish and the reservation is deleted, creating another reservation with the same name generates an error: "brsvadd: The specified reservation name is referenced by a job".

P101388

2015/11/12

After the PRE_EXEC or POST_EXEC process exits, the job remains running and does not exit.

P101288

2015/12/14

Jobs take longer to submit from floating clients than from server hosts.

80996

2016/01/29

If NEWJOB_REFRESH=Y is defined and a job with a run limit is in the pending state, the job's run limit is incorrectly multiplied by the CPU factor.

66209

2015/11/23

Enhance the mbatchd log messages for checking job parameters and for when MAX_EVENT_STREAM_FILE_NUMBER is reached.


The new solutions in LSF Version 9.1.3 Fix Pack 5:

ID

Fixed Date

Description

P101565

2016/01/19

CUDA MPS (Multi-Process Service), formerly known as CUDA Proxy, is a feature that allows multiple CUDA processes to share a single GPU context with EXCLUSIVE_PROCESS and DEFAULT modes.
This fix allows LSF to start MPS for jobs that require GPUs with EXCLUSIVE_PROCESS and DEFAULT modes.
Enable this feature by defining the following parameter in lsf.conf:
LSB_START_MPS
Syntax
LSB_START_MPS=y|Y
Description
If set to y|Y, LSF starts CUDA MPS for the GPU jobs that require only GPUs with EXCLUSIVE_PROCESS or DEFAULT modes. If users need GPUs with EXCLUSIVE_THREAD mode, LSF does not start CUDA MPS for the GPU jobs.
Specify the LSB_START_JOB_MPS environment variable to override this parameter at the job level:
LSB_START_JOB_MPS=y|Y|n|N
If LSF starts MPS for a job, LSF sets CUDA_MPS_PIPE_DIRECTORY instead of CUDA_VISIBLE_DEVICES. The GPU jobs communicate with MPS through a named pipe that is defined by CUDA_MPS_PIPE_DIRECTORY. The CUDA_MPS_PIPE_DIRECTORY directory is stored under the directory that is specified by LSF_TMPDIR. When the job finishes, LSF removes the pipe.
If the cgroup feature is enabled, LSF also creates a cgroup for MPS under the job level cgroup.
The MPS Server supports up to 16 client CUDA contexts concurrently. This limitation is per user per job, which means that MPS can support only up to 16 CUDA processes at one time even if LSF allocates multiple GPUs. MPS cannot exit normally if GPU jobs are killed. The LSF cgroup feature can help resolve this situation.
The MPS function is supported by CUDA Version 5.5 or later.
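For illustration, the feature might be enabled and overridden as follows (a sketch; the resource requirement and gpu_app are placeholders):

# In lsf.conf, enable MPS startup for eligible GPU jobs:
LSB_START_MPS=Y

# Submit a job that requests two exclusive-process GPUs; LSF starts MPS
# and sets CUDA_MPS_PIPE_DIRECTORY for the job:
bsub -R "rusage[ngpus_excl_p=2]" ./gpu_app

# Override the parameter at the job level so that MPS is not started:
export LSB_START_JOB_MPS=N
bsub -R "rusage[ngpus_excl_p=2]" ./gpu_app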

RFC#4638
P101559

2016/01/20

1. RETAIN does not work properly with host-type guarantee policies.
The RETAIN[] keyword is configured in the LOAN_POLICIES parameter of a GuaranteedResourcePool section in the lsb.resources file.
When loaning is enabled for a guaranteed resource pool without a RETAIN policy, LSF might loan out all of the owned resources in the pool without keeping anything in reserve for owners.
If the RETAIN keyword is set, LSF attempts to keep a small buffer of owned resources idle. The size of the buffer is determined by the configured value of the RETAIN keyword. When owners need these resources, they can have immediate access to at least a portion of the resources that were held idle.
During normal operations, the RETAIN policy might be temporarily violated. For example, when an owner job is first dispatched on an idle host in the pool, the number of idle hosts might drop below the RETAIN value. When this happens, LSF stops loaning from the pool, allowing jobs to use owned hosts until the RETAIN policy is no longer violated.
This fix resolves an issue where loaning continued in some cases even when the RETAIN policy was violated, which meant that owners might not be able to access owned hosts.
2. The scheduler does not use the closed_busy host as a candidate host.
To resolve this issue, the fix introduces the BUSY keyword for the PREEMPT_FOR parameter in lsb.params to allow preemptive scheduling to use the closed_busy host for pending and migrated jobs.
PREEMPT_FOR
Syntax
PREEMPT_FOR=[GROUP_JLP] [GROUP_MAX] [HOST_JLU] [LEAST_RUN_TIME] [MINI_JOB] [USER_JLP] [OPTIMAL_MINI_JOB] [BUSY]
BUSY
Enables preemptive scheduling to preempt slots on a closed_busy host for all preemptive jobs except suspended preemptive jobs. Only one job at a time can preempt a closed_busy host.
If the job cannot run within two minutes of making a reservation on the closed_busy host, the job preempts the necessary slots on other candidate hosts or all preemptable slots on this closed_busy host.
After two minutes, if the closed_busy host is still the candidate host and there are no preemptable running jobs on this closed_busy host, the job remains pending without any reservations.
If the job cannot run within five minutes of making a reservation on the closed_busy host, the job no longer considers this closed_busy host as a candidate host.
Tip
To decrease load on the closed_busy host for running preemptive pending jobs, specify JOB_CONTROLS=SUSPEND[brequeue $LSB_JOBID] on preemptable queues.
To allow a high-priority preemptive job that is submitted with a "select" statement to use the closed_busy host, avoid removing hosts from consideration based on load indices.
Default
0 (The parameter is not defined.)
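The following configuration sketch illustrates both solutions; the pool definition is abbreviated, and all names and values are placeholders:

# lsb.resources: keep 2 owned hosts idle for the owners of the pool
Begin GuaranteedResourcePool
NAME          = ownedPool
TYPE          = hosts
DISTRIBUTION  = ([ownerSLA, 10])
LOAN_POLICIES = QUEUES[all] RETAIN[2]
End GuaranteedResourcePool

# In the Parameters section of lsb.params: let preemptive scheduling
# use closed_busy hosts
PREEMPT_FOR = BUSY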

RFE#77619
P101419

2015/10/31

After a job has been pending for a set amount of time, it is recalled to the submission cluster and then re-forwarded to an execution cluster. In a multi-cluster environment, jobs forwarded to the execution cluster have always been sorted based on the job forward time.
The problem is that the execution cluster places the forwarded job at the bottom of the pending jobs list in the order it was forwarded, not in the order that the job was submitted on the submission cluster. Therefore, other jobs that have not been pending as long can be executed ahead of the recently re-forwarded job.
With the parameter MC_SORT_BY_SUBMIT_TIME set to Y in lsb.params, forwarded jobs are sorted based on the submission time. When the maximum rescheduled time has been reached and the pending jobs are rescheduled on the execution cluster, they are ordered based on their original submission time (the time when the job was first submitted on the submission cluster) and not the forwarding time (the time when the job was re-forwarded to the execution cluster). Jobs forwarded to the execution cluster using brequeue -a or brequeue -p are also sorted based on the submission time.
Note: This solution does not change the behavior of bswitch, bbot, or btop.
i. After bswitch, LSF forwards the job based on job switch time instead of the submission time.
ii. Using btop or bbot on the submission cluster does not affect the position of the jobs at the execution cluster or future forwarding.
iii. Users can use btop to move the job to the top of the queue at the execution cluster, but after the job is recalled and forwarded again, LSF orders the job based on the original submission time, and the previous btop position is not used.
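To enable this solution, define the parameter in the Parameters section of lsb.params (a sketch, assuming it is set on the execution cluster) and run badmin reconfig to apply the change:

MC_SORT_BY_SUBMIT_TIME = Y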


The fixed issues and solutions included in previous LSF Version 9.1.3 Fix Packs can be found in lsf9.1.3.5_fixed_bugs.pdf.

Readme file for: IBM® Platform LSF

Product/Component Release: 9.1.3

Update Name: Fix 388731

Fix ID: lsf-9.1.3.5-spk-2016-Feb-build388731

Publication date: 29 April 2016

Last modified date: 25 April 2016

Contents:

1.     List of fixes

2.     Download location

3.     Products or components affected

4.     System requirements

5.     Installation and configuration

6.     List of files

7.     Product notifications

8.     Copyright and trademark information

 

1.   List of fixes

P101586, P101582, P101578, P101577, P101575, P101570, P101566, P101557, P101548, P101533, P101530, P101512, P101511, P101503, P101501,
P101493, P101491, P101489, P101478, P101445, P101443, P101439, P101431, P101428, P101412, P101397, P101388, P101288, 80996 (No APAR),
66209 (No APAR), P101565, P101559, RFE#77619, RFE#85886, RFE#82743

2.   Download Location

Download Fix 388731 from the following location: http://www.ibm.com/eserver/support/fixes/

3.   Products or components affected

Components affected by the new issues addressed in LSF Version 9.1.3 Fix Pack 5 include:
LSF/bsub
LSF/mbschd
LSF/mesub
LSF/bjobs
LSF/bhist
LSF/mbatchd
LSF/sbatchd
LSF/bqueues
LSF/schmod_preemption.so
LSF/liblsf.so
LSF/liblsf.a
LSF/libbat.so
LSF/libbat.a
LSF/lsf.h
LSF/lsbatch.h
LSF/pim
LSF/res
LSF/lim
LSF/daemons.wrap
LSF/liblsbstream.so
LSF/matchexample.c
LSF/allocexample.c
LSF/bparams
LSF/schmod_default.so
LSF/schmod_parallel.so
LSF/schmod_fairshare.so
LSF/schmod_advrsv.so
LSF/schmod_affinity.so
LSF/schmod_aps.so
LSF/schmod_bluegene.so
LSF/schmod_cpuset.so
LSF/schmod_craylinux.so
LSF/schmod_crayx1.so
LSF/schmod_dc.so
LSF/schmod_dist.so
LSF/schmod_fcfs.so
LSF/schmod_jobweight.so
LSF/schmod_limit.so
LSF/schmod_maui.so
LSF/schmod_mc.so
LSF/schmod_pset.so
LSF/schmod_ps.so
LSF/schmod_rms.so
LSF/schmod_reserve.so
LSF/schmod_xl.so

 

4.   System requirements

Linux2.6-glibc2.3-x86_64
Linux3.10-glibc2.17-ppc64le

 

5.   Installation and configuration

 

5.1          Before installation

 

 LSF_TOP=Full path to the top-level installation directory of LSF.

1)    Log on to the LSF master host as root

2)    Set your environment:

-      For csh or tcsh: % source LSF_TOP/conf/cshrc.lsf

-      For sh, ksh, or bash: $ . LSF_TOP/conf/profile.lsf

 

5.2          Installation steps

 

1)    Go to the patch install directory: cd $LSF_ENVDIR/../9.1/install/

2)    Copy the patch file to the install directory $LSF_ENVDIR/../9.1/install/

3)    Run
badmin hclose all
badmin qinact all

4)    Run patchinstall: ./patchinstall <patch>

 

5.3          After installation

 

1)    Run
badmin hshutdown all
lsadmin resshutdown all
lsadmin limshutdown all

2)    Run
lsadmin limstartup all
lsadmin resstartup all
badmin hstartup all

3)    Run
badmin hopen all
badmin qact all

 

5.4          Uninstallation

 

To roll back a patch:

1)    Log on to the LSF master host as root

2)    Set your environment:

-      For csh or tcsh: % source LSF_TOP/conf/cshrc.lsf

-      For sh, ksh, or bash: $ . LSF_TOP/conf/profile.lsf

3)    Run
badmin hclose all
badmin qinact all

4)    Run ./patchinstall -r <patch>

5)    Run
badmin hshutdown all
lsadmin resshutdown all
lsadmin limshutdown all

6)    Run
lsadmin limstartup all
lsadmin resstartup all
badmin hstartup all

7)    Run
badmin hopen all
badmin qact all

6.   List of files in package

 

bsub
mbschd
mesub
bjobs
bhist
mbatchd
sbatchd
bqueues
schmod_preemption.so
liblsf.so
liblsf.a
libbat.so
libbat.a
lsf.h
lsbatch.h
pim
res
lim
daemons.wrap
liblsbstream.so
matchexample.c
allocexample.c
bparams
schmod_default.so
schmod_parallel.so
schmod_fairshare.so
schmod_advrsv.so
schmod_affinity.so
schmod_aps.so
schmod_bluegene.so
schmod_cpuset.so
schmod_craylinux.so
schmod_crayx1.so
schmod_dc.so
schmod_dist.so
schmod_fcfs.so
schmod_jobweight.so
schmod_limit.so
schmod_maui.so
schmod_mc.so
schmod_pset.so
schmod_ps.so
schmod_rms.so
schmod_reserve.so
schmod_xl.so

 

7.   Product notifications

To receive information about product solution and patch updates automatically, subscribe to product notifications on the My notifications page (www.ibm.com/support/mynotifications) on the IBM Support website (support.ibm.com). You can edit your subscription settings to choose the types of information you want to receive notifications about, for example, security bulletins, fixes, troubleshooting, and product enhancements or documentation changes.


8.   Copyright and trademark information

© Copyright IBM Corporation 2016

U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

IBM®, the IBM logo and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.