Use this page to configure the startup, fallback, and fallover
policies, as well as to specify whether to enable Workload Partitions
(WPAR) for the resource group.
Fields
- Startup policy
- Select one of the following startup policies for the resource
group:
- Online On Home Node Only
- Select this option to have the resource group brought online only
on its home (highest priority) node during startup of the resource
group. This option requires that the highest priority node be available,
which is the first node in the node list for the resource group. This
is the default selection.
- Online On First Available Node
- Select this option to have the resource group activate on the
first participating node that becomes available. If you have configured
the settling time for resource groups, it applies to this resource
group only when you use this startup policy.
- Online Using Node Distribution Policy
- Select this option to have the resource group brought online according
to the node-based distribution policy. This option allows only one
resource group to be brought online on each node during startup. Also,
if you are planning to use a single-adapter network that is configured
with IPAT by means of Replacement, then use this option as the startup
policy for your resource group. When you select this option, the Fallback
policy is set to Never fallback,
and you cannot select other options for this policy.
- Online On All Available Nodes
- Select this option to have the resource group brought online on
all nodes. Selecting this startup policy configures the resource group
as a concurrent resource group. If you select this option for the
resource group, ensure that resources in this group can be brought
online on multiple nodes simultaneously. When you select this option,
the Fallover policy is set to Bring
offline and the Fallback policy is
set to Never fallback, and you cannot select
other options for these policies.
- Fallover policy
- Fallover is the process of an active node acquiring resources
previously owned by another node in order to maintain availability
of those resources. Select one of the following fallover policies
for your custom resource group:
- Fallover To Next Priority Node
- Select this option to have the resource group brought online on
the next available node in the case of fallover, based on the default
node priority order specified in the node list for the resource group.
This is the default selection.
- Fallover Using Dynamic Node Priority
- Select this option to use one of the predefined dynamic node priority
policies. These dynamic node priority policies are based on RSCT variables,
such as the node with the most memory available. This option is not
available if the resource group has two or fewer nodes. If you select
this option and the resource group has more than two nodes, you also
must specify the Dynamic node priority policy below.
- Bring Offline (On Error Node Only)
- Select this option to bring a resource group offline on a node
during an error condition. This option is most suitable when you want
to ensure that if a particular node fails, the resource group goes
offline only on that node but remains online on other nodes. This
option is available only when you select Online On All
Available Nodes as the Startup policy for
the resource group.
- Dynamic Node policy
- You can configure the fallover behavior of a resource group to
use one of six dynamic node priority policies that define how the
takeover node is chosen dynamically for the resource group. These
policies are based on RSCT variables, such as the most available
memory or the lowest use of central processing units (CPU). This
policy determines the priority order of nodes to be used in determining
the destination of a resource group during an event that causes the
resource group to either move or be brought online. To recover the
resource group, PowerHA™ SystemMirror selects the node that
best fits the policy at the time of fallover. You must select a dynamic
node policy when you select Fallover Using Dynamic Node
Priority as the Fallover policy for
the resource group.
Select one of the following dynamic node policies:
- Next node in the list
- Select this option to have the takeover node be the next node
in the node list for the resource group.
- Node with most available memory
- Select this predefined option to have the takeover node be the
one with the highest percentage of free memory.
- Node with most available CPU cycles
- Select this predefined option to have the takeover node be the
one with the most available processor time.
- Node with least busy disk
- Select this predefined option to have the takeover node be the
one with the least busy storage disk.
- Node with highest return value of DNP script
- Select this user-defined option to have the takeover node be the
one with the highest return value based on the dynamic node policy
(DNP) script that is specified for the resource group. If you select
this option, you must also specify a user-defined file or script to
execute below.
- Node with lowest non-zero return value of DNP script
- Select this user-defined option to have the takeover node be the
one with the lowest non-zero return value based on the dynamic node
policy (DNP) script that is specified for the resource group. If you
select this option, you must also specify a user-defined file or script
to execute below.
- Execute file or script
- Specify the full path and file name of a user-defined script to
use to determine how the takeover node is chosen dynamically for the
resource group. You must specify this information when you select
either the Node with highest return value of DNP script option
or the Node with lowest non-zero return value of DNP script option
for your Dynamic Node policy.
- Timeout (seconds)
- Specify the length of time that PowerHA SystemMirror
is to wait for the user-defined script to complete. If the script
does not complete in the specified time on a node, the script is stopped
with a return value of zero for that node. If the script does not
complete in the specified time on all nodes, the default node priority
is used to determine the takeover node.
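The two user-defined DNP options rely on a script that reports a numeric value from each candidate node. The sketch below is a hypothetical example, not product-supplied code: the metric, the placeholder numbers, and the assumption that the value is conveyed through the script's exit status (the return value, limited to the range 0 through 255) are all illustrative. Verify the exact convention for your PowerHA SystemMirror release.

```shell
#!/bin/sh
# Hypothetical DNP script sketch. PowerHA SystemMirror runs the script
# on each candidate node and compares the results; with the "Node with
# highest return value of DNP script" policy, the node reporting the
# highest value becomes the takeover node.
#
# Illustrative metric: free memory as a percentage of total memory.
# The placeholder values below stand in for real probes (for example,
# vmstat and lsattr output on AIX).
free_kb=2097152     # placeholder: 2 GB free
total_kb=8388608    # placeholder: 8 GB total

pct=$(( free_kb * 100 / total_kb ))
echo "$pct"         # prints 25 with the placeholder numbers
# A real DNP script would report the value, for example:  exit "$pct"
```

Keep such a script fast and side-effect free: it runs on every candidate node during fallover, and (per the Timeout field above) a script that overruns the timeout is treated as returning zero on that node.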
- Fallback policy
- Fallback is the process in which a joining or reintegrating node
acquires resources previously owned by another node. Select one of
the following fallback policies for your custom resource group:
- Fallback To Higher Priority Node In The List
- Select this option to specify that the resource group is to fall
back when a higher priority node joins the cluster. If you select
this option, you can configure a Fallback timer to
delay the fallback of the resource group to its higher priority node
by a specific time interval. If you do not configure a fallback timer,
the resource group falls back immediately when a higher priority node
joins the cluster.
- Never Fallback
- Select this option to specify that the resource group is not to
fall back when a higher priority node joins the cluster.
- Fallback timer
- Select the time interval to use to delay the fallback of the
resource group to its higher priority node. Using a fallback timer
is useful because you can plan for maintenance outages associated
with this resource group or schedule a fallback to occur during
off-peak business hours. You can select one of the following
options: Daily, Weekly, Monthly, Yearly, or Specific
date. If you select the specific date option, you must
enter the date manually in the appropriate fields.
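The startup, fallover, and fallback policies described above can also be set from the command line with the clmgr utility. The sketch below is illustrative, not exact product syntax: the resource group name and node names are hypothetical, and the abbreviated attribute values (OHN, FNPN, FBHPN, and so on) are assumptions that you should confirm against the clmgr documentation for your PowerHA SystemMirror release.

```shell
# Sketch: creating a resource group with explicit policies via clmgr.
# Assumed attribute abbreviations -- verify for your release:
#   STARTUP:  OHN   (Online On Home Node Only)
#             OFAN  (Online On First Available Node)
#             OUDP  (Online Using Node Distribution Policy)
#             OAAN  (Online On All Available Nodes)
#   FALLOVER: FNPN  (Fallover To Next Priority Node)
#             FUDNP (Fallover Using Dynamic Node Priority)
#             BO    (Bring Offline On Error Node Only)
#   FALLBACK: FBHPN (Fallback To Higher Priority Node In The List)
#             NFB   (Never Fallback)
clmgr add resource_group rg_app1 \
    NODES="nodeA nodeB" \
    STARTUP=OHN \
    FALLOVER=FNPN \
    FALLBACK=FBHPN
```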
- Enable WPAR
- Select this option to enable Workload Partitions (WPAR), which
are virtualized operating system environments within a single
instance of the AIX® operating
system. This option specifies that all PowerHA SystemMirror application servers
in the current resource group are to run in the specified WPAR. In
addition, all service label and file system resources that are part
of this resource group are assigned to the specified WPAR.
- WPAR name
- The WPAR name is the same as the resource group name and cannot
be changed.