Readme for IBM® Spectrum Conductor 2.5.0 Interim Fix 600833
Readme file for: IBM Spectrum Conductor
Product/Component release: 2.5.0
Update name: Interim Fix 600833
Fix ID: sc-2.5-build600833
Publication date: November 29, 2021
Interim fix to add two new environment variables to control the default working directory for Jupyter notebook Spark Python kernels.
Contents
1. List of fixes
2. Download location
3. Installation and configuration
4. List of files
5. Product notifications
6. Copyright and trademark information
1. List of fixes
APAR: P104449
2. Download location
Download interim fix 600833 from the following location: http://www.ibm.com/eserver/support/fixes/
3. Installation and configuration
Follow the instructions in this section to download and install this interim fix on your cluster.
System requirements
Linux x86_64 or Linux ppc64le
Before installation
If you are updating an existing notebook, back up the notebook base data directory.
Note: For updated notebook packages, the notebook is undeployed and the
new version is deployed. Therefore, if you specified the notebook base data
directory under, or as the same as, the notebook's deployment directory, the
base data directory is removed. To retain your data, manually back up the
contents of the base data directory before you update the instance group.
a. Log in to the cluster management console as the cluster administrator.
b. Click Workload > Instance Groups, then click the instance group that you want to check.
c. Click Manage, and then Configure.
d. In the Notebooks tab, click the Configuration link in the Notebooks section, then check the Base data directory value.
Note: If the notebook
base data directory is under or is the same as the notebook’s deployment
directory, back up the base data directory by running the following commands
from the command line:
> mkdir -p /tmp/backup
> cp -a base_data_directory/instance_group_name /tmp/backup
Ensure that you back up the notebook base data directory for each instance group that
you want to upgrade.
Installation
a. Log in to the cluster management console as the cluster administrator.
b. Download the sc-2.5.0.0_build600833.tgz package and extract its contents to get the Jupyter-6.0.0.tar.gz file for the Jupyter 6.0.0 notebook.
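For example, assuming that the package was downloaded to the current directory, you can extract it from the command line:
> tar -xzf sc-2.5.0.0_build600833.tgz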
c. Add the Jupyter 6.0.0 package to your cluster:
To update an existing notebook:
1) Click Resources > Frameworks > Notebook Management, select Jupyter, and click Configure.
2) In the Deployment Settings tab, click Choose File in the Package section.
3) Select the Jupyter 6.0.0 package.
4) Click Update Notebook.
To add a new notebook:
1) Click Resources > Frameworks > Notebook Management, and then click Add.
2) In the Deployment Settings tab, click Choose File in the Package section.
3) Select the Jupyter 6.0.0 package.
4) Set the following parameters:
· Name: Jupyter
· Version: 6.0.0
· Start command: ./scripts/jupyterservicewrapper.sh start_jupyter.sh
· Stop command: ./scripts/stop_jupyter.sh
· Job monitor command: ./scripts/jobMonitor.sh
· Longest update interval for job monitor: 280
5) Select these options:
· Enable collaboration for the notebook
· Supports SSL
· Supports user impersonation
· Anaconda required
6) Click Add.
Post installation
a. Install Jupyter Enterprise Gateway in the Anaconda environment that will be used to host the Jupyter notebook.
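For example, a minimal sketch of a typical installation, assuming that the target Anaconda environment is named jupyter_env (substitute your own environment name) and that the host can reach PyPI:
> conda activate jupyter_env
> pip install --upgrade jupyter_enterprise_gateway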
b. From the cluster management console, click Workload > Instance Groups.
1) Create a new instance group that uses Jupyter 6.0.0. For details, see Creating instance groups.
2) If required, update your existing instance groups that use Jupyter 6.0.0. For details, see Modifying instance groups.
3) Optionally, use the new environment variables:
Name: CONDUCTOR_KERNEL_WORK_DIR
Value: File path
Description: Use this environment variable to specify the default working directory for Spark Python kernels in Jupyter notebooks.
To set the working directory to $NOTEBOOK_DATA_DIR/notebooks, use the following environment variable instead.
Name: SET_KERNEL_DIR_TO_NB_DIR
Value: true | false
Description: Use this environment variable to specify whether the default Spark Python kernel working directory is set to $NOTEBOOK_DATA_DIR/notebooks. If set to true, this variable takes precedence over CONDUCTOR_KERNEL_WORK_DIR.
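For example (a hypothetical setting; substitute a path that exists on your hosts), to have Spark Python kernels start in a shared scratch directory, set:
Name: CONDUCTOR_KERNEL_WORK_DIR
Value: /shared/kernel_workdir
Alternatively, to use the notebook data directory, set:
Name: SET_KERNEL_DIR_TO_NB_DIR
Value: true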
c. For the backed-up instance groups, restore the notebook base data directory files:
> cp -a /tmp/backup/instance_group_name base_data_directory
d. Verify that permissions and ownership of the replaced files are the same as they were before applying the fix. Update any file permissions or ownership as required.
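For example (a sketch only; the egoadmin owner shown is an assumption, substitute the actual owner from your backup), compare the restored files against the backup and correct ownership where it differs:
> ls -lR base_data_directory/instance_group_name
> ls -lR /tmp/backup/instance_group_name
> chown -R egoadmin:egoadmin base_data_directory/instance_group_name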
4. List of files
Jupyter-6.0.0.tar.gz
5. Product notifications
To receive information about product solution and patch updates automatically, subscribe to product notifications on the My Notifications page (http://www.ibm.com/support/mynotifications/) on the IBM Support website (http://support.ibm.com). You can edit your subscription settings to choose the types of information that you want to be notified about, for example, security bulletins, fixes, troubleshooting, and product enhancements or documentation changes.
6. Copyright and trademark information
© Copyright IBM Corporation 2021
U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
IBM®, the IBM logo and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.