Readme File for IBM Spectrum Conductor with Spark 2.3 RFE 129737

Readme file for: IBM® Spectrum Conductor with Spark

Product/Component Release: 2.3

Fix ID: conductorsparkmgmt-2.3.0_noarch_build523547-welfg

Publication date: June 21, 2019

 

This enhancement details the manual steps to change the location that the ELK_HARVEST_LOCATION environment variable points to in IBM Spectrum Conductor with Spark 2.3 after installation.

 

1.      Scope

2.      Configuration

3.      Copyright and trademark information

 

1.   Scope

Product version

IBM Spectrum Conductor with Spark 2.3

Prerequisite

A fresh installation of IBM Spectrum Conductor with Spark 2.2.1 or later, rather than an installation of an earlier version that was upgraded to IBM Spectrum Conductor with Spark 2.2.1.

2.   Configuration

a.      Log on to the cluster management console as CLUSTERADMIN.

b.      From the management console, stop all Spark Instance Groups:

a)       Go to Workload > Spark > Spark Instance Groups.

b)       Select all Spark Instance Groups and stop them.

c)        Wait a few minutes before proceeding, or longer depending on the size of your cluster, any workload that must complete, and any data that still needs to be processed by the Elastic Stack.

Tip: Verify that all shipper logs (filebeat.log.<hostname> at $EGO_TOP/integration/elk/log/filebeat/) log the following message continuously for a few minutes:

<date timestamp> INFO No non-zero metrics in the last 30s
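For example, one way to watch a shipper log on a host is to tail it and filter for this message (a suggested check only, assuming the default log path listed in the tip above):

tail -f $EGO_TOP/integration/elk/log/filebeat/filebeat.log.<hostname> | grep "No non-zero metrics"

If the message keeps appearing every 30 seconds with no other harvesting activity in between, the shipper has no pending data.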

c.      From the management console, stop and modify services:

a)       Go to System & Services > EGO Services > Service Profile.

b)       Stop the elk-shipper service and modify its ELK_HARVEST_LOCATION environment variable:

Under the System Services tab, locate elk-shipper:

o   Stop the service.

o   Change the value of the ELK_HARVEST_LOCATION environment variable from /var/tmp/elk_logs to <new_ELK_HARVEST_LOCATION>.

o   Save your changes.

c)        Stop the SparkCleanup service and modify its ELK_HARVEST_LOCATION environment variable:

Under the Other Services tab, locate SparkCleanup:

o   Stop the service.

o   Change the value of the ELK_HARVEST_LOCATION environment variable from /var/tmp/elk_logs to <new_ELK_HARVEST_LOCATION>.

o   Save your changes.

d)       Stop the ascd service.

Under the System Services tab, locate ascd and stop the service.
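Tip: You can optionally confirm that the elk-shipper, SparkCleanup, and ascd services are stopped from the command line. For example, after logging on to the EGO CLI (egosh user logon), list the services and check their states (a suggested verification step only):

egosh service list

Verify that the elk-shipper, SparkCleanup, and ascd entries are no longer running before you continue.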

d.      Modify Spark Instance Groups:

a)       On all hosts where Spark Instance Groups are deployed, modify the SPARK_EGO_LOG_DIR parameter in spark-env.sh for all Spark Instance Groups:

o   Locate the Spark deployment directory (<spark_deployment_dir>) for each Spark Instance Group.

o   Modify the SPARK_EGO_LOG_DIR parameter for each Spark Instance Group:

vi <spark_deployment_dir>/*/conf/spark-env.sh

o   Change SPARK_EGO_LOG_DIR=/var/tmp/elk_logs to SPARK_EGO_LOG_DIR=<new_ELK_HARVEST_LOCATION> in each file (see the tip after this step).
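Tip: As an alternative to editing each file with vi, a sed command similar to the following can apply the change across all Spark Instance Groups under one deployment directory. This is only an illustrative one-liner; back up the files first and substitute your actual <spark_deployment_dir> and <new_ELK_HARVEST_LOCATION> values:

sed -i 's|SPARK_EGO_LOG_DIR=/var/tmp/elk_logs|SPARK_EGO_LOG_DIR=<new_ELK_HARVEST_LOCATION>|g' <spark_deployment_dir>/*/conf/spark-env.sh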

b)       Modify the SPARK_EGO_LOG_DIR_ROOT parameter in the application template .yml file for all Spark Instance Groups:

o   Modify the SPARK_EGO_LOG_DIR_ROOT parameter for each Spark Instance Group at $EGO_CONFDIR/../../ascd/work/appinstances:

vi <SIG_UUID>.yml

o   Replace all occurrences of /var/tmp/elk_logs with <new_ELK_HARVEST_LOCATION>.

Note: At least 4 of the occurrences are prefixed with SPARK_EGO_LOG_DIR_ROOT.

Note: At least 1 occurrence appears in the command section of the .yml file.
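Tip: After editing, you can verify that no references to the old path remain in a template file, for example:

grep -c '/var/tmp/elk_logs' <SIG_UUID>.yml

This example command prints the number of lines that still contain the old path; it should report 0 once every occurrence has been replaced.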

c)        On all hosts where Spark Instance Groups are deployed, move content from /var/tmp/elk_logs to <new_ELK_HARVEST_LOCATION>:

mv /var/tmp/elk_logs/* <new_ELK_HARVEST_LOCATION>
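Note: Before running the mv command, make sure that <new_ELK_HARVEST_LOCATION> exists on every host and is writable by the operating system account that runs the cluster services, for example:

mkdir -p <new_ELK_HARVEST_LOCATION>

chown <cluster_admin_os_user> <new_ELK_HARVEST_LOCATION>

The <cluster_admin_os_user> value is a placeholder only; replace it with the OS account used in your environment.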

e.      From the management console, start services:

a)       Go to System & Services > EGO Services > Service Profile.

b)       Under the System Services tab, start the elk-shipper service.

c)        Wait 5 minutes after the elk-shipper service is started to ensure that the new ELK_HARVEST_LOCATION is registered by the service monitor.

d)       Under the Other Services tab, locate the SparkCleanup service and start it (this will trigger the ascd service to also start).

e)       If the ascd service is not started, under the Other Services tab, locate the ascd service and start it.

f)         Go to Workload > Spark > Spark Instance Groups, select all Spark Instance Groups, and start them.

f.       For future upgrades to IBM Spectrum Conductor 2.4 or later, ensure that you set the ELK_HARVEST_LOCATION environment variable as part of the upgrade steps:

export ELK_HARVEST_LOCATION=<new_ELK_HARVEST_LOCATION>

3.   Copyright and trademark information

© Copyright IBM Corporation 2019

U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

IBM®, the IBM logo and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.