Edit Resource Group Advanced Properties

Use the Edit Advanced Properties task to change the Policies and Other Resource Group Settings for a resource group.

Fields

Startup policy
Select one of the following startup policies for the resource group:
Online on home node only
Select this option to have the resource group brought online only on its home (highest priority) node during startup of the resource group. This option requires that the highest priority node, which is the first node in the node list for the resource group, be available. This option is the default selection.
Online on first available node
Select this option to have the resource group brought online on the first participating node that becomes available. If a settling time is configured for resource groups, it applies to this resource group only when you use this startup policy option.
Online using node distribution policy
Select this option to have the resource group brought online according to the node-based distribution policy. This option allows only one resource group to be brought online on each node during startup.

When you select this option, the Fallback policy is set to Never fallback, and you cannot select other options for this policy.

Online on all available nodes
Select this option to have the resource group brought online on all nodes. Selecting this startup policy configures the resource group as a concurrent resource group. If you select this option for the resource group, ensure that resources in this group can be brought online on multiple nodes simultaneously. When you select this option, the Fallover policy is set to Bring offline and the Fallback policy is set to Never fallback, and you cannot select other options for these policies.

All resources and applications managed by a concurrent resource group must also be concurrent capable.
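
For reference, the startup policy can typically also be set from the command line with the clmgr utility. The following sketch is illustrative only: the resource group name RG1 is hypothetical, and the STARTUP attribute values shown (OHN, OFAN, OUDP, and OAAN, corresponding to the four options above) should be verified against the clmgr help on your system.

    # Set the startup policy for resource group RG1 (hypothetical name).
    # OHN  = Online on home node only (default)
    # OFAN = Online on first available node
    # OUDP = Online using node distribution policy
    # OAAN = Online on all available nodes (concurrent)
    clmgr modify resource_group RG1 STARTUP=OHN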

Fallover policy
Fallover is the process by which an active node acquires the resources previously owned by another node in order to maintain the availability of those resources. Select one of the following fallover policies for your custom resource group:
Fallover to next priority node in the list
Select this option to have the resource group, in the case of fallover, brought online on the next available node, based on the default node priority order specified in the node list for the resource group.

This option is the default selection.

Fallover using dynamic node priority
Select this option to use one of the predefined dynamic node priority policies. These policies are based on RSCT variables, such as the node with the most available memory. This option is not available if the resource group has two or fewer nodes. If you select this option and the resource group has more than two nodes, you must also specify a value for the Dynamic node priority policy field.
Bring offline (On error node only)
Select this option to bring a resource group offline on a node during an error condition.

This option is most suitable when you want to ensure that if a particular node fails, the resource group goes offline only on that node but remains online on other nodes.

This option is available only when you select Online on all available nodes as the Startup policy for the resource group.
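
As with the startup policy, the fallover policy can typically be set with clmgr. This sketch assumes the same hypothetical resource group RG1, and the FALLOVER attribute values (FNPN, FUDNP, and BO) should be verified on your system.

    # FNPN  = Fallover to next priority node in the list (default)
    # FUDNP = Fallover using dynamic node priority
    # BO    = Bring offline (On error node only)
    clmgr modify resource_group RG1 FALLOVER=FUDNP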

Dynamic node priority policy
You can configure the fallover behavior of a resource group to use one of the following dynamic node priority policies, which define how the takeover node is chosen dynamically for the resource group.

These policies are based on RSCT variables, such as the most available memory or the lowest central processing unit (CPU) usage.

This policy establishes the priority order of nodes that is used to select the destination of a resource group during an event that causes the resource group to either move or be brought online.

To recover the resource group, PowerHA® SystemMirror selects the node that best fits the policy at the time of fallover.

You must select a dynamic node priority policy when you select Fallover using dynamic node priority as the Fallover policy for the resource group.

Select one of the following dynamic node policies:

Next node in the list
Select this option to have the takeover node be the next node in the node list for the resource group.
Node with most available memory
Select this predefined option to have the takeover node be the one with the highest percentage of free memory.
Node with most available CPU cycles
Select this predefined option to have the takeover node be the one with the most available processor time.
Node with least busy disk
Select this predefined option to have the takeover node be the one with the least busy storage disk.
Node with highest return value of DNP script
Select this user-defined option to have the takeover node be the one with the highest return value based on the dynamic node policy (DNP) script that is specified for the resource group.

If you select this option, you must also specify a user-defined file or script to execute.

Node with lowest non-zero return value of DNP script
Select this user-defined option to have the takeover node be the one with the lowest non-zero return value based on the DNP script that is specified for the resource group.

If you select this option, you must also specify a user-defined file or script to execute.

Run file or script
Specify the full path and file name of the user-defined script that determines how the takeover node is chosen dynamically for the resource group. You must specify this information when you select either the Node with highest return value of DNP script option or the Node with lowest non-zero return value of DNP script option for your Dynamic node priority policy.
Timeout (seconds)
Specify the length of time for PowerHA SystemMirror to wait for the user-defined script to complete. If the script does not complete in the specified time on a node, the script is stopped with a return value of zero for that node. If the script does not complete in the specified time on all nodes, the default node priority is used to determine the takeover node.
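
To illustrate the Run file or script and Timeout (seconds) fields, the following is a minimal ksh sketch of a user-defined DNP script. It assumes, as the field names suggest, that each node reports its value through the script's exit status; the script path and the choice of metric (free paging space) are hypothetical examples, not a prescribed implementation.

    #!/bin/ksh
    # Hypothetical DNP script: exit with the percentage of free paging
    # space on this node. With the "Node with highest return value of
    # DNP script" policy, the node with the most free paging space is
    # chosen as the takeover node.
    used=$(lsps -s | tail -1 | awk '{print $2}' | tr -d '%')
    exit $((100 - used))

The script can then be attached to the resource group, for example with clmgr; the attribute names below are assumptions to verify against the clmgr documentation for your release.

    # Use the user-defined script with a 30-second timeout per node.
    clmgr modify resource_group RG1 \
        NODE_PRIORITY_POLICY=cl_highest_udscript_rc \
        NODE_PRIORITY_POLICY_SCRIPT=/usr/local/ha/dnp_free_paging.sh \
        NODE_PRIORITY_POLICY_TIMEOUT=30
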
Fallback policy
Fallback is the process in which a joining or reintegrating node acquires resources previously owned by another node. Select one of the following fallback policies for your custom resource group:
Fallback to higher priority node in the list
Select this option to specify that the resource group is to fall back when a higher priority node joins the cluster.

If you select this option, you can configure a Fallback timer to delay, by a specific interval of time, when the resource group falls back to its higher priority node.

If you do not configure a fallback timer, the resource group falls back immediately when a higher priority node joins the cluster.

Never fallback
Select this option to specify that the resource group is not to fall back when a higher priority node joins the cluster.
Fallback timer
Select the time interval to use to delay the fallback of the resource group to its higher priority node.

Using a fallback timer is useful because you can plan for maintenance outages associated with this resource group or schedule a fallback to occur during off-peak business hours.

You can select one of the following options: Immediately, Daily, Weekly, Monthly, Yearly, or Specific date. If you select the specific date option, you must enter the date manually in the appropriate fields.

Note: This option is enabled only when Fallover using dynamic node priority is selected for the Fallover policy and when Node with highest return value of DNP script or Node with lowest non-zero return value of DNP script is selected for the Dynamic node priority policy.
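
A corresponding clmgr sketch for the fallback policy follows. The FALLBACK values (FBHPN and NFB) and the FALLBACK_AT attribute, which names a previously defined delayed fallback timer, are recollections to verify; both RG1 and offpeak_timer are hypothetical names.

    # FBHPN = Fallback to higher priority node in the list
    # NFB   = Never fallback
    # FALLBACK_AT names an already-defined delayed fallback timer.
    clmgr modify resource_group RG1 FALLBACK=FBHPN FALLBACK_AT=offpeak_timer
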
Inter-Site management policy
Select the inter-site policy to use when the cluster has sites.
Online on either site (prefer primary)
Select this option to have the resources acquired by the sites in a prioritized manner, with the primary site preferred.

If a site fails, the active site with the highest priority acquires the resource. When the failed site rejoins, the resource falls back to the site with the highest priority.

Ignore
Select this option if sites and replicated resources are not defined or are not being used.
Online on either site
Select this option to have the resources acquired by any site in the resource chain. When a site fails, the standby site with the highest priority acquires the resource. When the failed site rejoins, the resource remains with the new site.
Online on both sites
Select this option to have the resources acquired by both the primary and secondary sites. The resources support concurrency.
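
If you manage the cluster with clmgr, the inter-site policy is typically exposed as a resource group attribute as well. The attribute name SITE_POLICY and the values in this sketch are assumptions to verify for your PowerHA SystemMirror release.

    # Possible SITE_POLICY values (to verify): ignore, primary,
    # either, both.
    clmgr modify resource_group RG1 SITE_POLICY=primary
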
File system options
This section is used to view and edit file system options.
File system consistency check
Identifies the method that is used to check the consistency of the file systems: fsck (default) or logredo (for fast recovery). If you choose the logredo option and the logredo function fails, the fsck command runs in its place.
File system recovery method
Identifies the recovery method for acquiring and releasing the file systems. The values are sequential (default) and parallel. Parallel can provide faster recovery, but cannot be used on shared or nested file systems.
File system mounted before IP configured
Specifies whether PowerHA SystemMirror takes over the volume groups and mounts the file systems from a failed node before or after taking over the IP address or addresses of the failed node.

The default is false, meaning the IP address is taken over first. Similarly, upon reintegration of a node, the IP address is acquired before the file systems.

Set this field to true if the resource group contains file systems to export so that the file systems are available before NFS requests are received on the service address.
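
The three file system options above also map to resource group attributes in clmgr. This sketch uses the attribute names as commonly documented (FSCHECK_TOOL, RECOVERY_METHOD, and FS_BEFORE_IPADDR), which you should verify on your system.

    # logredo for fast recovery, parallel recovery, and mount the
    # file systems before the IP address is taken over.
    clmgr modify resource_group RG1 \
        FSCHECK_TOOL=logredo \
        RECOVERY_METHOD=parallel \
        FS_BEFORE_IPADDR=true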

File systems/directories to NFS mount
Identifies the file systems or directories to NFS mount. All nodes in the resource chain attempt to NFS mount these file systems or directories when the owner node is active in the cluster.
Stable storage path (NFSv4)
This field contains the path where NFSv4 stable storage is stored. The path must belong to a file system managed by the resource group.

The path need not be an existing directory. PowerHA SystemMirror creates the path automatically.

If this field contains a non-empty value and the File systems/Directories to NFS mount field is blank, the contents of this field are ignored and a warning is displayed.

Preferred network
From the list of previously defined IP networks, select the network on which you want to NFS mount the file systems.

This field is applicable only if a value is entered in the File systems/directories to NFS mount field.

The Service IP Labels/IP Addresses field must contain a Service IP label which is present on the network you select.

Note: You can specify more than one Service IP label in the Service IP Labels/IP Addresses field. At least one of the IP labels must be on the network that you select.

If the selected network is unavailable when the node attempts the NFS mount, the node looks for other defined, available IP networks in the cluster to establish the NFS mount.
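
Taken together, the NFS-related fields above might be set from the command line as in the following sketch. The attribute names (MOUNT_FILESYSTEM, NFS_NETWORK, and STABLE_STORAGE_PATH) and the "NFS mount point;local mount point" pairing shown for MOUNT_FILESYSTEM are recollections to verify; the file system and network names are hypothetical.

    # NFS mount /nfsmnt (backed by local /sharedfs) over network
    # net_ether_01, with NFSv4 stable storage under /sharedfs.
    clmgr modify resource_group RG1 \
        MOUNT_FILESYSTEM="/nfsmnt;/sharedfs" \
        NFS_NETWORK=net_ether_01 \
        STABLE_STORAGE_PATH=/sharedfs/nfsv4_stable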
