IBM Platform MPI 9.1.2 Fix Pack 1 Readme File

Description

Readme documentation for IBM® Platform™ MPI 9.1.2 Fix Pack 1, including installation instructions, prerequisites, and a list of fixes.
Readme file for: IBM® Platform™ MPI
Product/Component Release: 9.1.2
Update Name: Fix Pack 1
Fix ID: Platform_MPI_09.01.02.01
Publication date: 20 February 2014
Last modified date: 20 February 2014
Contents:
- Product website
- Products or components affected
- System requirements
- Installation and configuration
- List of changes
- Copyright and trademark information
1. Product website

View the IBM® Platform™ MPI 9.1.2 Fix Pack 1 website at the following location: http://www.ibm.com/systems/technicalcomputing/platformcomputing/products/mpi/index.html.
2. Products or components affected

IBM® Platform™ MPI
3. System requirements

3.1 IBM® Platform™ MPI for Linux

- Intel/AMD x86 32-bit, AMD Opteron, and EM64T servers
- CentOS 5; Red Hat Enterprise Linux AS 4, 5, and 6; and SuSE Linux Enterprise Server 9, 10, and 11 operating systems
- A minimum of 49 MB of disk space in the installation directory (/opt), and a minimum of 120 MB of disk space in /tmp during installation
3.2 IBM® Platform™ MPI for Windows

- Intel/AMD x86 32-bit, AMD Opteron, and EM64T servers
- Operating system:
  - 64-bit Windows HPC Server 2008, or
  - 32- or 64-bit Windows 7, Server 2003, Server 2008, Vista, or XP
- A minimum of 50 MB of disk space in the installation directory (C:\Program Files (x86)), and a minimum of 120 MB of temporary space on the local disk during installation
- On any non-HPCS system that does not have LSF installed, all systems that will be used to run or submit IBM® Platform™ MPI jobs must be members of the same Active Directory domain. You must install and start the IBM® Platform™ MPI Remote Launch service on all Windows 2008 systems except Windows HPC Server 2008 and Windows LSF. The service is not required to run in SMP (single-system) mode.
4. Installation and configuration

4.1 Before installation

None.

4.2 Installation steps
IBM® Platform™ MPI for Linux:

IBM® Platform™ MPI must be installed on all machines in the same directory or be accessible through the same shared network path. The following describes the process of using the shell archive-based installer to install the product using the RPM database:

- Place the downloaded file into the /tmp directory.
- Run the installer with superuser privileges. For example:

  sudo ./platform_mpi-09.01.02.01r.x64.bin
  Verifying archive integrity... All good.
  Uncompressing platform_mpi-09.01.02.01r.x64.bin......

  <End User License Agreement>

  Press Enter to continue viewing the license agreement, or enter "1" to accept the agreement, "2" to decline it, "3" to print it, "4" to read non-IBM terms, or "99" to go back to the previous screen.

- Review the license agreement, then select option 1 to accept the agreement and continue the installation. The license agreement is available in the $MPI_ROOT/EULA directory after completing the installation.

For additional installer command options, refer to the Installation information section of the Release Notes for IBM® Platform™ MPI: Linux (Version 9.1).
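To confirm that the installation works, you can build and run a small test program. The following is a minimal sketch; the MPI_ROOT path shown is an assumption about the installation location (substitute your actual install directory), and hello_world.c is any simple MPI test program you supply:

  export MPI_ROOT=/opt/ibm/platform_mpi
  $MPI_ROOT/bin/mpicc -o hello_world hello_world.c
  $MPI_ROOT/bin/mpirun -np 2 ./hello_world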
IBM® Platform™ MPI for Windows:

To use the standard interactive installation, double-click PlatformMPI-09.1.2.1r.w64.exe and follow the onscreen prompts to install the product.

For other options, including invoking the installer from the command line to enable unattended installation from a command window, refer to the Installation instructions section of the Release Notes for IBM® Platform™ MPI: Windows (Version 9.1).
4.3 After installation

None.

4.4 Uninstalling

None.
5. List of changes

- Microsoft HPC Servers require the 32-bit Microsoft Visual C++ 2008 Redistributable Package (x86) to load libwlm-hpc.dll and libhpc.dll at run time. Ensure that all Windows HPC compute nodes and all Windows HPC client nodes used to submit jobs to the HPC cluster have the Microsoft Visual C++ 2008 Redistributable Package (x86) installed. You can download the Microsoft Visual C++ 2008 Redistributable Package (x86) from: http://www.microsoft.com/en-us/download/details.aspx?id=29
- Resolved an issue with the "IsExclusive" bit for pcmpiccpservice.exe on Windows HPC Server 2008 R2. By default, the Platform MPI services on Windows HPC are launched with the "IsExclusive" bit set. This allows individual ranks to inherit the full CPU affinity mask assigned by the Windows HPC scheduler for all processes on that node. However, when the "IsExclusive" bit is set, other jobs cannot start successfully until the first job completes. Windows HPC Server 2008 R2 includes an enhancement that avoids the need to set the "IsExclusive" bit while still allowing the ranks to use the appropriate CPU affinity mask allocated for the job. This change happens automatically when Windows HPC Server 2008 R2 is detected as the job scheduler.
- Resolved an issue launching MPMD applications on Windows, where Windows applications that start more than one executable on a single host (such as MPMD programs) would hang on job startup. There was no known workaround for this issue.
- Resolved an issue with MPI_TMPDIR paths that contain 64 or more characters. The error message associated with this issue was as follows:

  Error in cpu affinity, during shared memory setup, step2.

  The workaround for this issue was to set MPI_TMPDIR to a shorter path.
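  For example, a shorter location can be passed on the mpirun command line with the -e option (the path, rank count, and program name below are placeholders):

  mpirun -e MPI_TMPDIR=/tmp -np 4 ./a.out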
- Added an alias to reduce the maximum possible pinned memory footprint. The alias "-cmd=pinmemreduce" will set the following options:

  -e MPI_PIN_PERCENTAGE=50
  -e MPI_RDMA_MSGSIZE=16384,16384,1048576
  -e MPI_RDMA_NFRAGMENT=64

  The calculation for the maximum possible pinned memory footprint is very conservative. On some machines with 16 cores per node and job sizes larger than 512 ranks, the calculation can exceed the physical memory installed on the system. The typical error message begins with the following:

  ERROR: The total amount of memory that may be pinned (# bytes), is insufficient to support even minimal rdma network transfers.

  This alias reduces the maximum pinned fragment size, the maximum number of pinned fragments, and the maximum percentage of physical memory that can be pinned. For messages larger than 1 MB, there may be a minor performance degradation with these settings.
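  For example, assuming the alias is given as an ordinary mpirun option (the rank count and program name are placeholders), the following is equivalent to specifying the three -e settings listed above:

  mpirun -cmd=pinmemreduce -np 1024 ./a.out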
- Fixed an issue using Platform MPI with LSF 9.x and later. LSF 9.1.1.0 includes some enhancements that required a change in the APIs for interacting with the job scheduler. By default, Platform MPI uses the pre-9.x APIs to better support the current installed base of LSF users. LSF 9.1.1.0 and later is supported with Platform MPI 9.1.2.0 and later. To schedule a job to LSF using mpirun, add "-e MPI_USELSF_VERSION=9" to the mpirun command line. Not all Platform MPI and LSF options require this environment variable to be set, but it is safe to use in all cases. Typical error messages associated with this issue were as follows:

  mpirun: Failed to load wlm-lsf.so.
  mpirun: Failed to load wlm-lsf.dll.

  In rare cases, no error message is produced, and the mpirun command will SIGSEGV before any remote processes are started.
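  For example, a sketch of an LSF 9.x submission (the scheduling options, rank count, and program name are placeholders for whatever your jobs already use):

  mpirun -e MPI_USELSF_VERSION=9 <your existing LSF scheduling options> -np 8 ./a.out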
- Resolved an issue with non-existent hosts provided as input to a Platform MPI job. When a non-existent host name was provided as input to a Platform MPI 9.1.2.0 job, the DNS lookup of that host could hang.
- Enabled dynamic service level for PCMPI_IB_DYNAMIC_SL=1 and XRC mode.
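  For example, the environment variable can be set through mpirun with the -e option (the rank count and program name are placeholders; any XRC-related options are whatever your site already uses):

  mpirun -e PCMPI_IB_DYNAMIC_SL=1 -np 16 ./a.out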
- Increased the maximum number of IPv4 addresses on a single host to 256.
- Resolved an issue with the product "Version" string in the instr file.
6. Copyright and trademark information

© Copyright IBM Corporation 2014

U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

IBM®, the IBM logo, and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.