Readme File for IBM® Spectrum Symphony 7.1.2 and IBM® Spectrum Conductor with Spark 2.2.1 Interim Fix 505506

Readme File for: IBM Spectrum Symphony and IBM Spectrum Conductor with Spark
Product Release: 7.1.2 and 2.2.1
Update Name: Interim Fix 505506
Fix ID: sym-7.1.2-cws-2.2.1-build505506-jpmc
Publication Date: November 15, 2018

This interim fix provides a consolidated patch that includes several fixes and enhancements for a cluster with IBM Spectrum Symphony 7.1.2 and IBM Spectrum Conductor with Spark 2.2.1 installed:
· cws-2.2-build444867
· cws-2.2.1-build484101-jpmc
· cws-2.2-build485340
· cws-2.2.1-build487869
· cws-2.2-build488099-jpmc
· cws-2.2.1-build491029
· cws-2.2.1-build491032-jpmc
· cws-2.2.1-build492199
· cws-2.2.1-build494339-jpmc
· cws-2.2.1-build495310-jpmc
· cws-2.2.1-build496653
· cws-2.2.1-build500186-jpmc
· cws-2.2.1-build501124-jpmc
· cws-2.2.1-build504187-jpmc
· cws-2.2.1-build504656
· cws-2.2.1-build504802
· cws-2.2.1-build504932-jpmc
· cws-2.2.1-build505817
· cws-2.2.1-build506079
· lsf-10.1-build494337
· sym-7.1.2-cws-2.2.1_x86_64-build493380-jpmc
· sym-7.1.2-cws-2.2.1_x86_64-build496537
· sym-7.1.2-build465569
· sym-7.1.2-build478868
· sym-7.1.2-build484546-jpmc
· sym-7.1.2-build484698-jpmc
· sym-7.1.2-build489386
· sym-7.1.2-build491231
· sym-7.1.2-build492724
· sym-7.1.2-build493076-jpmc
· sym-7.1.2-build493221-jpmc
· sym-7.1.2-build493865-jpmc
· sym-7.1.2-build494674-jpmc
· sym-7.1.2-build497975
· sym-7.1.2-build497982
· sym-7.1.2-build498646-jpmc
· sym-7.1.2-build501396
· sym-7.1.2-build501634-jpmc
· sym-7.1-build501649-jpmc
· sym-7.1.2-build502161
· sym-7.1.2-build502983-jpmc
· sym-7.1.2-build503792-jpmc
· sym-7.1.2-build498160-jpmc
· sym-7.2.0.2-build503885
· Fix for dataloader error and ES client NULL issue
Contents
1. List of fixes
2. Download location
3. Product and components affected
4. Installation and configuration
5. Uninstallation
6. List of files
7. Product notifications
8. Copyright and trademark information
1. List of fixes
APARs: P102143, P102505, P102530, P102552, P102542, P102576, P102577, P102574, P102600, P102617, P102618, P102628, P102307, P102778, P102782, P102785, P102786, P102615, P102808, P102610, P102643, P102195, P102460, P102554, P102578, P102369, P102612, P102636, P102607, P102741, P102727, P102737, P102753, P102668, P102755, P102468, P102496
RFEs: 120489, 126033, 108049, 106501, 97838, 109152, 120488, 119214, 115427, 120147, 119841, 119580
2. Download location
Download interim fix 505506 from the following location: https://www.ibm.com/eserver/support/fixes/
3. Product and components affected
• PMC, REST, EGO, SOAM
• ELK, WEBGUI, ascd, conductorspark_gui
• WebServiceGateway, ServiceDirector
• Spark version 2.1.1, Spark version 2.3.1
• Jupyter-4.1.0, Jupyter-5.0.0
• LSF/mbatchd, LSF/mbschd
4. Installation and configuration
Follow the instructions in this section to download and install this interim fix in your cluster:
4.5 Install Spark versions and updated notebook packages
4.6 Install the IBM Spectrum LSF patch
Linux x86_64

Platform | Package to download | Description
x86_64 | sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_binary.tar.gz | Package containing binaries installed under $EGO_TOP in build list
x86_64 | sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_plugin.tar.gz | Plug-in package containing the source code (.c), the header source file (.h), the Makefile, and the library (.a) for building a custom plug-in. Also contains EGO 3.5 or EGO 3.6 header files, which are required for applications based on EGO 3.1 APIs to work.
x86_64 | sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_spark.tar.gz | Spark package containing Spark versions 2.1.1/2.3.1 and Jupyter notebook versions 4.1.0/5.0.0.
x86_64 | sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_lsf.tar.gz | Package containing IBM Spectrum LSF builds
x86_64 | patch_script_build505506.sh | Script to apply the binary patch
a. Log in to the cluster management console as the cluster administrator and stop all Spark instance groups.
b. Log on to the master host as the cluster administrator, disable applications, and shut down the cluster:
$ egosh user logon -u Admin -x Admin
$ soamcontrol app disable all
$ egoshutdown.sh
c. Download the patch_script_build505506.sh script and the sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_binary.tar.gz file, both to the same folder.
d. Source your environment and run the script as follows:
$ ./patch_script_build505506.sh patch
e. Edit the $EGO_CONFDIR/../../gui/conf/navigation/pmc_menu.xml file to leave the url value empty for the standardReport and customReport items, for example:
<MenuItems id="eventAndReportTreeSource">
  <MenuItem id="standardReport" label="@{pmc.tree.node.standarReport.label}"
    status="" url="" layoutSourceId="standarReportLayout" tabGroup=""
    highlightTabId="" aclResource="Main_perf_standard" aclPermission="1"
    helpGroupId="standardreport" />
  <MenuItem id="customReport" label="@{pmc.tree.node.customerReport.label}"
    status="" url="" layoutSourceId="customReportLayout" tabGroup=""
    highlightTabId="" aclResource="Main_perf_custom" aclPermission="1"
    helpGroupId="customerreport" />
f. Edit the $EGO_CONFDIR/../../gui/conf/pmcconf/pmc_conf_ego.xml file to add the SaveResourcePlanHybridPolicyOwnershipUnbalanced <Parameter> node as a child node of the <Configuration> node:
<Parameter>
  <Name>SaveResourcePlanHybridPolicyOwnershipUnbalanced</Name>
  <!-- Defines whether to save a hybrid scheduling policy that is not balanced. The value is either "true" or "false". -->
  <!-- If set to "true", you can click either "OK" to continue or "Cancel" to discard the current changes. -->
  <!-- If set to "false", an alert prompts you to check the unbalanced consumer field values, and you can click "OK" to discard the current changes. -->
  <Value>true</Value>
</Parameter>
g. Ensure that the RestrictHostLogRetrieve parameter is enabled in the $EGO_CONFDIR/../../gui/conf/pmcconf/pmc_conf_ego.xml file:
<ParamConfs>
...
<Configuration>
...
<Parameter>
<Name>RestrictHostLogRetrieve</Name>
<Value>true</Value>
</Parameter>
...
</Configuration>
</ParamConfs>
h. In the pmc_conf_ego.xml file, update the WhitelistLogsDir parameter to use regular expressions, for example:
<ParamConfs>
...
<Configuration>
...
<Parameter>
<Name>WhitelistLogsDir</Name>
<Value>(${EGO_TOP}/.+);(${SOAM_HOME}/.+)</Value>
</Parameter>
</Configuration>
</ParamConfs>
A regular expression specifies a set of strings; a simple way to specify a finite set is to list its members. The following patterns are supported:
Character classes:
\s      White space
\S      Not white space
\d      Digit
\D      Not digit
\w      Word
\W      Not word
\x      Hexadecimal digit
\O      Octal digit

Anchors:
^       Start of string, or start of line in a multi-line pattern
$       End of string, or end of line in a multi-line pattern

Quantifiers:
+       One or more
*       Zero or more
{n}     Exactly n times
?       Once or none

Special characters:
\n      New line
\r      Carriage return
\t      Tab
. (dot) Any character except line break

Groups and ranges:
(a|b)   a or b
(….)    Group
[abc]   Range (a or b or c)
[^abc]  Not (a or b or c)
Here are some examples of how you can use a regular expression in the WhitelistLogsDir parameter:
• To allow access to all files and subfolders under EGO_TOP, use:
${EGO_TOP}/.*
• To deny access to relative subfolders (like EGO_TOP/../forbidden), use:
${EGO_TOP}/(?!.*\.\./).*
• To allow access to any file under folders named log or logs under EGO_TOP, use:
${EGO_TOP}/.*/(log|logs)/.+
• To allow access only to files under EGO_LOGDIR with .log as the file extension or in the file names, use:
${EGO_TOP}/kernel/log/.+\.log.*
• To allow only files with .log in the file names followed by a string (such as the host name) that might end with one or two digits, use:
${EGO_TOP}/kernel/log/.+\.log\..+[0-9]??
For reference and testing, try https://regex101.com.
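Before editing the parameter, you can also sanity-check a candidate pattern from the command line. This is a minimal sketch, assuming GNU grep with PCRE support (-P) and using /opt/egoshare as a stand-in for your real EGO_TOP value:

```shell
# Check which paths a candidate WhitelistLogsDir pattern accepts.
# /opt/egoshare is a placeholder for your real EGO_TOP value.
EGO_TOP=/opt/egoshare
pattern="^${EGO_TOP}/(?!.*\.\./).*"

# A normal log path is allowed:
echo "${EGO_TOP}/kernel/log/lim.log.host1" | grep -qP "$pattern" && echo "allowed"

# A path that escapes through ../ is rejected:
echo "${EGO_TOP}/../forbidden/secret" | grep -qP "$pattern" || echo "denied"
```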
i. Edit the $EGO_ESRVDIR/esc/conf/services/wsg.xml and $EGO_ESRVDIR/esc/conf/services/named.xml files to add the following EGO_TOP environment variable:
<ego:ActivitySpecification>
  <ego:Command>${EGO_TOP}/3.6/scripts/egosrvloader.sh named -u <user> -f</ego:Command>
  <ego:EnvironmentVariable name="EGO_TOP">${EGO_TOP}</ego:EnvironmentVariable>
  <ego:ExecutionUser>root</ego:ExecutionUser>
  ...
</ego:ActivitySpecification>
j. Configure the new supported parameters in $EGO_CONFDIR/../../ascd/conf/ascd.conf by adding the following lines to the file and uncommenting the desired parameters:
###################################
#Enforce specific security settings
###################################
#Enforce the Spark master to either authenticate and authorize, or to trust the specific submission user.
#If set to either EGO_AUTH or EGO_TRUST, users cannot choose for themselves during Spark instance group registration or modification.
#A value of "EGO_AUTH" forces authentication to be enabled. A value of "EGO_TRUST" forces authentication to be disabled, and all submission users are trusted.
#CONDUCTOR_SPARK_ENFORCE_SPARK_EGO_AUTH_MODE=EGO_AUTH
#Enforce SSL encryption parameters in Spark.
#If a value is set, users cannot choose for themselves during Spark instance group registration or modification.
#A value of "WORKLOADANDSPARKUIS" enforces SSL for workload and Spark UIs (the master UI, driver UI, and history service UI).
#A value of "WORKLOADONLY" enforces SSL for workload only.
#A value of "SPARKUISONLY" enforces SSL for Spark UIs only.
#A value of "DISABLE" enforces disabling SSL.
#CONDUCTOR_SPARK_ENFORCE_ENCRYPTION=WORKLOADANDSPARKUIS
#Enforce Spark security authentication and SASL encryption in Spark security parameters.
#If set to either TRUE or FALSE, users cannot choose for themselves during Spark instance group registration or modification.
#A value of "TRUE" enforces spark.authenticate to be true.
#A value of "FALSE" enforces that parameter to be false.
#CONDUCTOR_SPARK_ENFORCE_SECURITY_SPARK_AUTH=TRUE
#Enforce notebook SSL.
#If set to either TRUE or FALSE, users cannot choose for themselves during Spark instance group registration or modification.
#A value of "TRUE" enforces notebook SSL to be true.
#A value of "FALSE" enforces notebook SSL to be false.
#CONDUCTOR_SPARK_ENFORCE_NOTEBOOK_SSL=TRUE
NOTE: The CONDUCTOR_SPARK_ENFORCE_SECURITY_SPARK_AUTH_AND_SASL_ENCRYPT parameter is no longer valid. If you previously set this parameter, ensure that you remove it from ascd.conf.
k. (Optional) To control access to soft links, add the EGO_RFA_ALLOW_SOFTLINK parameter to the $EGO_CONFDIR/ego.conf file. Valid values are y or Y to allow access, and n or N to deny access. If EGO_RFA_ALLOW_SOFTLINK is not defined, access to the contents of soft link files is allowed. For example:
EGO_RFA_ALLOW_SOFTLINK=n
l. Edit the $EGO_CONFDIR/ego.conf file to set the EGO_ENABLE_UNAVAILABLE_HOST_IN_RG parameter to y or Y, for example:
EGO_ENABLE_UNAVAILABLE_HOST_IN_RG=Y
m. Edit the $EGO_CONFDIR/wsm.conf file and add the new GUI_PROXY_HOSTNAME parameter, for example:
GUI_PROXY_HOSTNAME=your_proxy_host_name.example.abc.com
n. Edit the application profile to add the SOAM_DDT_WORK_DIR environment variable in the osType section. For example, if SOAM_DDT_WORK_DIR=/tmp, DDT work data is stored under /tmp/work/datamanager/:
<osType name="all" startCmd="${SOAM_DEPLOY_DIR}/xxx " workDir="${SOAM_HOME}/work">
  <env name="SOAM_DDT_WORK_DIR">/tmp/</env>
</osType>
Register the modified application profile to apply your changes:
$ soamreg <appProfileName>
o. As the root user, grant permissions to the egocontrol binary as follows:
$ chown root $EGO_TOP/3.6/linux-x86_64/bin/egocontrol
$ chmod 700 $EGO_TOP/3.6/linux-x86_64/bin/egocontrol
$ chmod u+s $EGO_TOP/3.6/linux-x86_64/bin/egocontrol
$ setfacl -m u:CLUSTERADMIN:x $EGO_TOP/3.6/linux-x86_64/bin/egocontrol
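The resulting mode should be 4700 (setuid plus owner-only rwx). A quick way to confirm this, sketched here on a throwaway file because changing egocontrol itself requires root; CLUSTERADMIN above stands for your cluster administrator account:

```shell
# Demonstrate the permission pattern applied to egocontrol, on a scratch
# file so it runs without root. On the real binary, also run the chown
# and setfacl commands shown above.
f=$(mktemp)
chmod 700 "$f"     # owner-only read/write/execute
chmod u+s "$f"     # setuid bit: runs with the file owner's privileges
stat -c '%a' "$f"  # octal mode; 4700 = setuid (4) + rwx for owner (700)
rm -f "$f"
```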
p. On each management host, edit the following parameters in the $EGO_CONFDIR/ego.conf file to enable EGO audit logging:
EGO_AUDIT_LOG=Y
EGO_AUDIT_LOGDIR=$EGO_TOP/audits
EGO_AUDIT_LOGMASK=LOG_INFO
q. On each management host, edit the EGO_CONSUMER_LEVEL_EXCLUSIVE_BIGRATIO_RG parameter in the $EGO_CONFDIR/ego.conf file to specify resource groups from which hosts are selected for slot allocation using the default stacked policy. Specify a list of resource groups, separated by spaces within quotation marks, as follows:
EGO_CONSUMER_LEVEL_EXCLUSIVE_BIGRATIO_RG="rg1 rg2 …"
NOTE: Ensure that the specified resource groups are configured with the consumer-level exclusive policy.
r. Configure SOAM_HISTORY_FSYNC_INTERVAL to control history flush as follows:
• To apply this function to all applications, define this parameter as an environment variable in the $EGO_CONFDIR/../../eservice/esc/conf/services/sd.xml file. For example:
<sc:ActivityDescription>
  <ego:Attribute name="hostType" type="xsd:string">X86_64</ego:Attribute>
  <ego:ActivitySpecification>
    <ego:EnvironmentVariable name="SOAM_HISTORY_FSYNC_INTERVAL">10</ego:EnvironmentVariable>
    ......
  </ego:ActivitySpecification>
</sc:ActivityDescription>
• To apply this function to specific applications, configure this parameter in the SSM section of each application profile. When defined, this value takes precedence over the value configured in sd.xml. For example, to set the session and task history fsync interval to 10 seconds for the symping7.1.2 application, configure the symping7.1.2.xml file as follows:
<SSM resReq="" shutDownTimeout="300" startUpTimeout="60" workDir="${EGO_SHARED_TOP}/soam/work">
  <osTypes>
    <osType name="all" startCmd="${SOAM_HOME}/${VERSION_NUM}/${EGO_MACHINE_TYPE}/etc/ssm" workDir="${EGO_CONFDIR}/../../soam/work">
      …
      <env name="SOAM_HISTORY_FSYNC_INTERVAL">10</env>
    </osType>
  </osTypes>
</SSM>
s. If $EGO_TOP/3.6/linux-x86_64/lib/libbatchjni.so exists, copy $EGO_TOP/3.5/linux-x86_64/lib/libbatchjni.so to $EGO_TOP/3.6/linux-x86_64/lib/libbatchjni.so.
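Step s can be scripted as a guarded copy. The sketch below uses a scratch directory as EGO_TOP so it runs anywhere, and the .orig backup is our own safety addition, not part of the fix; on a real host, point EGO_TOP at your installation:

```shell
# Sketch of step s as a guarded copy. For demonstration this builds a
# scratch tree as EGO_TOP; on a real host, use your installation path.
EGO_TOP=$(mktemp -d)
mkdir -p "$EGO_TOP/3.5/linux-x86_64/lib" "$EGO_TOP/3.6/linux-x86_64/lib"
echo "lib-3.5" > "$EGO_TOP/3.5/linux-x86_64/lib/libbatchjni.so"
echo "lib-3.6" > "$EGO_TOP/3.6/linux-x86_64/lib/libbatchjni.so"

src="$EGO_TOP/3.5/linux-x86_64/lib/libbatchjni.so"
dst="$EGO_TOP/3.6/linux-x86_64/lib/libbatchjni.so"
if [ -f "$dst" ]; then                 # only replace an existing 3.6 copy
    cp -p "$dst" "$dst.orig"           # optional backup, our addition
    cp -p "$src" "$dst"
fi
```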
t. Clear your browser cache.
u. Start the cluster and enable applications:
$ egosh ego start all
$ soamcontrol app enable <AppName>
a. Copy the sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_plugin.tar.gz file to your build host and decompress the package to a directory, which is hereafter referred to as the "extract directory".
You should see the following files and folders in the extract directory:
• sec_ego_ext_plugin.a: Static library used for building the custom plug-in.
• sec.h: Header file to be included by the customized source code file.
• sample/sec_customize_auth.c: Sample source code file.
• sample/Makefile: Sample Makefile.
b. Place the customized source code file and Makefile in the extract directory and build the custom plug-in. Use the following steps as a reference:
a) Copy the sample source code file and sample Makefile from the sample/ subdirectory to the extract directory.
b) Edit the Makefile in the extract directory and set the GCC value to the full path of your GCC. For example:
# Define the value of GCC to your own gcc full path; use GCC 4.8.2
GCC=/usr/bin/gcc
c) To build with the statically linked OpenSSL library, copy libcrypto.a from your local directory to the extract directory and edit the Makefile:
Comment out the following command:
$(GCC) -g -fPIC -shared -nostartfiles $(SEC_EXT_PLUGIN_OBJ) $(OBJS) -o sec_ego_ext_custom.so -lstdc++ -lpthread -lcrypto -ldl
Uncomment the following command:
$(GCC) -g -fPIC -shared -nostartfiles $(SEC_EXT_PLUGIN_OBJ) ./libcrypto.a $(OBJS) -o sec_ego_ext_custom.so -lstdc++ -lpthread -ldl
d) In the extract directory, run the "make" command to build the plug-in:
$ make
You have now built the plug-in, named sec_ego_ext_custom.so.
c. Ensure that file ownership for sec_ego_ext_custom.so is set to the cluster administrator
account and file permissions are set to 644.
d. Log on to the master host as the cluster administrator, disable applications in IBM Spectrum Symphony, and shut down the cluster:
$ soamcontrol app disable all
$ egosh service stop all
$ egosh ego shutdown all
e. Back up the sec_ego_ext_custom.so file under the $EGO_LIBDIR directory.
f. Log on to each host in the cluster and copy the custom plug-in that you built previously (sec_ego_ext_custom.so) to the $EGO_LIBDIR directory.
g. Configure authentication through the master plug-in on management hosts. On each management host, edit the following parameters in the $EGO_CONFDIR/ego.conf file:
• EGO_SEC_PLUGIN: Specify cluster authentication through the master plug-in (sec_ego_master):
EGO_SEC_PLUGIN=sec_ego_master
• EGO_SEC_CONF: Specify the plug-in's configuration in the format "path_to_plugin_conf_dir,created_ttl,plugin_log_level,path_to_plugin_log_dir", where:
o path_to_plugin_conf_dir (required): Specifies the absolute path to $EGO_CONFDIR, where the master plug-in's configuration file is located.
o created_ttl (optional): The master plug-in does not use this parameter; specify its value as 0.
o plugin_log_level (optional): Specifies the log level for the master plug-in. Valid values are DEBUG, INFO, WARN, or ERROR.
o path_to_plugin_log_dir (optional): Specifies the absolute path to the directory where the master plug-in's logs are located.
For example:
EGO_SEC_CONF="/opt/egoshare/kernel/conf,0,ERROR,/opt/cluster/MH/kernel/log"
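The EGO_SEC_CONF value packs its four fields into one comma-separated string. The sketch below splits such a value the same way, purely to illustrate the format; it is not an IBM-provided tool:

```shell
# Split an EGO_SEC_CONF-style value into its four documented fields.
conf="/opt/egoshare/kernel/conf,0,ERROR,/opt/cluster/MH/kernel/log"
IFS=, read -r conf_dir ttl log_level log_dir <<EOF
$conf
EOF
echo "conf dir:  $conf_dir"   # required; must be an absolute path
echo "ttl:       $ttl"        # 0 for the master plug-in
echo "log level: $log_level"  # DEBUG, INFO, WARN, or ERROR
echo "log dir:   $log_dir"    # optional log directory
case "$conf_dir" in
    /*) : ;;                  # absolute path, as required
    *)  echo "warning: path_to_plugin_conf_dir should be absolute" ;;
esac
```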
h. On each management host, edit the following parameters in the $EGO_CONFDIR/masterauth.conf file:
• EGO_SEC_SUB_PLUGIN1: Specify the custom plug-in (or sample plug-in that uses the same name) as the first sub plug-in:
EGO_SEC_SUB_PLUGIN1=sec_ego_ext_custom
• EGO_SEC_SUB_CONF1: Specify the first sub plug-in's configuration in the format "path_to_plugin_conf_dir,created_ttl,plugin_log_level,path_to_plugin_log_dir", where:
o path_to_plugin_conf_dir (required): Specifies the absolute path to $EGO_CONFDIR, where the custom plug-in's (or sample plug-in's) configuration file is located.
o created_ttl (optional): Specifies a time-to-live duration for the authentication token sent from the client to the server. Valid values are 0 or empty (indicating that the default value of 10 hours must be used).
o plugin_log_level (optional): Specifies the log level for the custom plug-in (or sample plug-in). Valid values are DEBUG, INFO, WARN, or ERROR. As a best practice, set the log level to ERROR or WARN; a lower level logs too many messages, making troubleshooting harder if required.
o path_to_plugin_log_dir (optional): Specifies the absolute path to the directory where the custom plug-in's (or sample plug-in's) logs are located.
For example:
EGO_SEC_SUB_CONF1="/opt/egoshare/kernel/conf,0,ERROR,/opt/cluster/MH/kernel/log"
• EGO_SEC_SUB_EXTRA_CONF1: Specify whether user logons are encrypted with RSA algorithms. Valid values are Y or N. For example:
EGO_SEC_SUB_EXTRA_CONF1="ENFORCE_RSA=Y"
When enabled, a message similar to the following is logged to ego_ext_plugin_server.log:
WARN [27440] server_start(): RSA usage check is enabled.
• EGO_SEC_SUB_PLUGIN2: Specify the default plug-in as the second sub plug-in:
EGO_SEC_SUB_PLUGIN2=sec_ego_default
• EGO_SEC_SUB_CONF2: Specify the second sub plug-in's configuration in the format "path_to_plugin_conf_dir,created_ttl", where:
o path_to_plugin_conf_dir (required): Specifies the absolute path to $EGO_CONFDIR, where the default plug-in's database file is located.
o created_ttl (optional): Specifies a time-to-live duration for the authentication token sent from the client to the server. Valid values are 0 or empty (indicating that the default value of 10 hours must be used).
For example:
EGO_SEC_SUB_CONF2="/opt/egoshare/kernel/conf"
i. If the custom plug-in has a configuration file, create the file under $EGO_CONFDIR on all management hosts and configure its parameters. For the sample plug-in, create the customauth.conf file under $EGO_CONFDIR and configure users in the file. For example, define two test users to demonstrate the enhancement via the sample plug-in as follows:
test_user1=pass
test_user2=fail
j. Configure the authentication plug-in on compute and client hosts:
• If a compute or client host uses the default plug-in for authentication, modify the EGO_SEC_PLUGIN parameter in the ego.conf file as follows:
EGO_SEC_PLUGIN=sec_ego_default
The ego.conf file is at $EGO_CONFDIR/ on compute hosts and at $SOAM_HOME/conf/ on client hosts.
• If the compute or client host uses the custom plug-in (or sample plug-in) for authentication, modify the EGO_SEC_PLUGIN parameter in the ego.conf file as follows:
EGO_SEC_PLUGIN=sec_ego_ext_co
The ego.conf file is at $EGO_CONFDIR/ on compute hosts and at $SOAM_HOME/conf/ on client hosts.
k. From the master host, restart EGO:
$ egosh ego start all
l. The header files for the EGO 3.5 and EGO 3.6 APIs are in the egoapi.tar.gz package. The sample code is in cmd/egoconsole.c and includes the following samples:
• How to get the roles of a user.
• How to get the service list.
• How to create a role that can create, modify, or delete a user and assign or unassign roles to a user.
• How to create a user.
• How to assign a role to a user.
• How to unassign a role from a user.
• How to delete a user.
To run the sample:
a) Decompress the egoapi.tar.gz package.
b) Change to the cmd directory:
i. Edit the Makefile to set the EGO_TOP parameter to the IBM Spectrum Symphony installation directory. Also, set the EGO_VERSION.
ii. Run "make". The binary egocmd is created.
c) Source the environment for your cluster.
d) Run the "egocmd" command to test the sample.
4.5 Install Spark versions and updated notebook packages
a. Log in to the cluster management console as the cluster administrator and stop all Spark instance groups.
b. On your client machine, unzip the sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_spark.tar.gz package, for example:
$ mkdir -p /tmp/fix505506_spark
$ tar zoxf sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_spark.tar.gz -C /tmp/fix505506_spark
c. Launch the browser and clear the browser cache; then, log in to the cluster management console as the administrator.
• Install Spark 2.1.1:
a. Remove the Spark 2.1.1 package if it exists.
a) Click Workload > Spark > Version Management.
b) Select 2.1.1.
c) Click Remove.
b. Add the Spark 2.1.1 package to your cluster.
a) Click Workload > Spark > Version Management.
b) Click Add.
c) Click Browse and select the /tmp/fix505506_spark/Spark2.1.1-Conductor2.2.1.tgz package.
d) Click Add.
• Install Spark 2.3.1:
a. Remove the Spark 2.3.1 package if it exists.
a) Click Workload > Spark > Version Management.
b) Select 2.3.1.
c) Click Remove.
b. Add the Spark 2.3.1 package to your cluster.
a) Click Workload > Spark > Version Management.
b) Click Add.
c) Click Browse and select the /tmp/fix505506_spark/Spark2.3.1-Conductor2.2.1.tgz package.
d) Click Add.
d. Update the Jupyter notebook packages in your cluster.
NOTE: This update can be applied to one Spark instance group at a time; Spark instance groups that are not updated continue to work as is. After updating the notebook permissions, the permissions cannot be reverted. Spark instance groups apply only to IBM Spectrum Conductor with Spark.
• Update the Jupyter 4.1.0 notebook:
a) Extract the /tmp/fix505506_spark/Jupyter-4.1.0.tar.gz package to a temporary directory, replace the Anaconda2-4.1.1-Linux-x86_64.sh package with your customized Anaconda package, and then regenerate the Jupyter-4.1.0.tar.gz package.
b) Copy the regenerated Jupyter-4.1.0.tar.gz package to a host with web browser access.
c) From the cluster management console, navigate to Workload > Spark > Notebook Management.
d) Select Jupyter and click Configure.
e) Click Browse and locate the Jupyter notebook package that you copied in step b.
f) Modify the Start command field to add "--disable_terminal true", for example:
$ ./scripts/start_jupyter.sh --disable_terminal true
g) Remove the Prestart Command and leave it empty.
h) Click Update Notebook.
• Update the Jupyter 5.0.0 notebook:
a) Extract the /tmp/fix505506_spark/JupyterPython3-5.0.0.tar.gz package to a temporary directory, replace the Anaconda3-4.4.0-Linux-x86_64.sh package with your customized Anaconda package, and then regenerate the JupyterPython3-5.0.0.tar.gz package.
b) Copy the regenerated JupyterPython3-5.0.0.tar.gz package to a host with web browser access.
c) From the cluster management console, navigate to Workload > Spark > Notebook Management.
d) Select JupyterPython3 and click Configure.
e) Click Browse and locate the Jupyter notebook package that you copied in step b.
f) Modify the Start command field to add "--disable_terminal true", for example:
$ ./scripts/start_jupyter.sh --disable_terminal true
g) Remove the Prestart Command and leave it empty.
h) Click Update Notebook.
e. Create a new Spark instance group that uses the new Spark and notebook packages. For details, see Creating Spark instance groups.
f. If required, upgrade your existing Spark instance groups to use the new Spark version 2.1.1 and Spark version 2.3.1 packages. For details, see Updating the Spark version and notebook packages for existing Spark instance groups.
NOTE: For existing Spark instance groups, updating does not involve deleting and re-creating Spark instance groups. This patch takes effect for both newly created and updated Spark instance groups.
g. Restart the Spark instance group after the package has finished updating.
4.6 Install the IBM Spectrum LSF patch
a. Log on to the LSF master host as "root".
b. Source the environment profile.
c. On your client machine, unzip the sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_lsf.tar.gz package, for example:
$ mkdir -p /tmp/fix505506_lsf
$ tar zoxf sym-7.1.2.0-cws-2.2.1.0_x86_64-build505506_lsf.tar.gz -C /tmp/fix505506_lsf
d. Run mbatchd -V to determine the binary type.
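The LSF patch ships in two packages built against different glibc levels. One way to see which level your host matches is to query the system glibc version; this is a sketch only, and the mbatchd -V output remains the authoritative check:

```shell
# Suggest which LSF patch tarball matches the host's glibc level:
# glibc 2.17 or later -> linux3.10-glibc2.17; older -> linux2.6-glibc2.3.
glibc=$(getconf GNU_LIBC_VERSION | awk '{print $2}')   # e.g. "2.17"
major=${glibc%%.*}
minor=${glibc#*.}; minor=${minor%%.*}
if [ "$major" -gt 2 ] || [ "$minor" -ge 17 ]; then
    echo "use lsf10.1_linux3.10-glibc2.17-x86_64-494337.tar.Z"
else
    echo "use lsf10.1_linux2.6-glibc2.3-x86_64-494337.tar.Z"
fi
```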
e. Unzip /tmp/fix505506_lsf/lsf10.1_linux2.6-glibc2.3-x86_64-494337.tar.Z or /tmp/fix505506_lsf/lsf10.1_linux3.10-glibc2.17-x86_64-494337.tar.Z.
f. Go to the $LSF_SERVERDIR directory:
$ cd $LSF_SERVERDIR
g. Rename the following files:
$ mv $LSF_SERVERDIR/mbatchd $LSF_SERVERDIR/mbatchd.bak
$ mv $LSF_SERVERDIR/mbschd $LSF_SERVERDIR/mbschd.bak
h. Copy the files to $LSF_SERVERDIR:
$ cp mbatchd $LSF_SERVERDIR/mbatchd
$ cp mbschd $LSF_SERVERDIR/mbschd
i. When logged on to the LSF master host as "root", run badmin mbdrestart.
a. Log on to each Linux host in your cluster as "root".
b. Run the following commands, where $dbpath points to your original RPM DB location:
$ rpm -e conductorsparkmgmt-2.2.0.0 --dbpath $dbpath
$ rpm -e conductorsparkcore-2.2.0.0 --dbpath $dbpath
$ rpm -e conductormgmt-2.2.0.0 --dbpath $dbpath
$ rpm -e ascd-2.2.0.0 --dbpath $dbpath
$ rpm -e egogpfsmonitor-3.4.0.0 --dbpath $dbpath
$ rpm -e egoyarn-3.4.0.0 --dbpath $dbpath
$ rpm -e egomgmt-3.4.0.0 --dbpath $dbpath
$ rpm -e egorest-3.4.0.0 --dbpath $dbpath
$ rpm -e egowlp-8.5.5.9 --dbpath $dbpath
$ rpm -e egocore-3.4.0.0 --dbpath $dbpath
$ rpm -e egoelastic-1.2.0.0 --dbpath $dbpath
$ rpm -e egogpfsmonitor-3.5.0.0 --dbpath $dbpath
$ rpm -e egomgmt-3.5.0.0 --dbpath $dbpath
$ rpm -e egorest-3.5.0.0 --dbpath $dbpath
$ rpm -e egocore-3.5.0.0 --dbpath $dbpath
5. Uninstallation
If required, follow the instructions in this section to uninstall this interim fix from your cluster.
a. Log in to the cluster management console as the cluster administrator and stop all Spark instance groups.
b. Log on to the master host as the cluster administrator, disable applications, and shut down the cluster:
$ egosh user logon -u Admin -x Admin
$ soamcontrol app disable all
$ egoshutdown.sh
c. Source your environment and make sure that the 505506_backup.tar file generated during installation is under the $EGO_TOP folder. Run the script as follows:
$ ./patch_script_build505506.sh rollback
d. Start the cluster and enable applications:
$ egosh ego start all
$ soamcontrol app enable <AppName>
e. Verify that the following commands return the original WEBGUI host name in the description:
$ egosh client view GUIURL_1
$ egosh client view GUISSOURL_1
f. Restart Spark instance groups.
g. Log on to the LSF master host as "root".
h. Rename the following files:
$ mv $LSF_SERVERDIR/mbatchd $LSF_SERVERDIR/mbatchd.old
$ mv $LSF_SERVERDIR/mbschd $LSF_SERVERDIR/mbschd.old
$ mv $LSF_SERVERDIR/mbatchd.bak $LSF_SERVERDIR/mbatchd
$ mv $LSF_SERVERDIR/mbschd.bak $LSF_SERVERDIR/mbschd
i. Run badmin mbdrestart.
6. List of files
Refer to the checksum.md5 file for details.
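Downloaded packages can be verified against that manifest with md5sum. The sketch below uses a locally created file and manifest for illustration, since checksum.md5 itself ships with the fix:

```shell
# Verify files against an md5 manifest, the same mechanism used by
# checksum.md5. demo.txt and its manifest exist only for illustration.
cd "$(mktemp -d)"
echo "interim fix payload" > demo.txt
md5sum demo.txt > checksum.md5
md5sum -c checksum.md5   # reports "demo.txt: OK" while the file is intact
```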
7. Product notifications
To receive information about product solution and patch updates automatically, subscribe to product notifications on the My Notifications page (http://www.ibm.com/support/mynotifications) on the IBM Support website (http://support.ibm.com). You can edit your subscription settings to choose the types of information that you want to be notified about, for example, security bulletins, fixes, troubleshooting, and product enhancements or documentation changes.
8. Copyright and trademark information
© Copyright IBM Corporation 2018
U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
IBM®, the IBM logo, and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.