Create Resource Group Wizard: Choose policies and attributes

Use this page to configure the startup, fallover, and fallback policies for the resource group. You can also specify whether to enable workload partitions (WPAR) for the resource group.

Fields

Startup policy
Select one of the following startup policies for the resource group:
Online On Home Node Only
Select this option to bring the resource group online only on its home (highest priority) node during startup of the resource group. To use this option, the highest priority node, which is the first node in the node list for the resource group, must be available. This option is the default selection.
Online On First Available Node
Select this option to have the resource group activated on the first participating node that becomes available. If you configure a settling time for resource groups, it applies to this resource group only when you use this startup policy option.
Online Using Node Distribution Policy
Select this option to have the resource group brought online according to the node-based distribution policy. This option allows only one resource group to be brought online on each node during startup. Use this option if you plan to use a single-adapter network that uses IP address takeover (IPAT) via IP replacement. When you select this option, the Fallback policy property is set to the Never fallback option, and you cannot select other options for that policy.
Online On All Available Nodes
Select this option to bring the resource group online on all nodes. Selecting this startup policy configures the resource group as a concurrent resource group. If you select this option for the resource group, ensure that the resources in this group can be brought online on multiple nodes simultaneously. When you select this option, the Fallover policy property is set to the Bring offline option. Also, the Fallback policy property is set to the Never fallback option. Additionally, you cannot select other options for these policies.
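To make the differences between these startup policies concrete, the following minimal Python sketch models the node-selection logic that each option implies. It is an illustration only, not PowerHA SystemMirror code; the function, the node names, and the policy abbreviations (OHN, OFAN, OUDP, OAAN) are used here for brevity.

    # Illustrative model of the four startup policies; not PowerHA code.
    def startup_nodes(policy, node_list, available, hosting_a_group):
        """Return the node(s) on which the resource group is brought online."""
        candidates = [n for n in node_list if n in available]
        if policy == "OHN":    # Online On Home Node Only
            home = node_list[0]          # highest priority node is first in the list
            return [home] if home in available else []
        if policy == "OFAN":   # Online On First Available Node
            return candidates[:1]
        if policy == "OUDP":   # Online Using Node Distribution Policy
            # Only one resource group per node during startup.
            return [n for n in candidates if n not in hosting_a_group][:1]
        if policy == "OAAN":   # Online On All Available Nodes (concurrent)
            return candidates
        raise ValueError(policy)

    print(startup_nodes("OHN",  ["nodeA", "nodeB"], {"nodeB"}, set()))  # [] (waits for home node)
    print(startup_nodes("OFAN", ["nodeA", "nodeB"], {"nodeB"}, set()))  # ['nodeB']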
Fallover policy
Fallover is the process by which an active node acquires resources that were previously owned by another node in order to maintain the availability of those resources. Select one of the following fallover policies for your custom resource group:
Fallover To Next Priority Node In The List
Select this option to have only one resource group brought online on each node in the case of fallover. The resource group is brought online based on the default node priority order that is specified in the node list for the resource group. This option is the default selection.
Fallover Using Dynamic Node Priority
Select this option to use one of the predefined dynamic node priority policies. These policies are based on Reliable Scalable Cluster Technology (RSCT) variables, such as which node has the most available memory. This option is not available if the resource group has two or fewer nodes. If you select this option and the resource group has more than two nodes, you also must specify the Dynamic node priority policy property.
Bring Offline (On Error Node Only)
Select this option to bring a resource group offline on a node during an error condition. Use this option to ensure that if a particular node fails, the resource group goes offline only on that node, but remains online on other nodes. This option is available only when you select the Online On All Available Nodes option as the Startup policy property for the resource group.
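The fallover policies can be sketched the same way. The following hypothetical Python helper shows where a resource group moves when one of its nodes fails under each option; dnp_rank stands in for a dynamic node priority score, such as free memory (see the Dynamic node priority policy property below).

    # Illustrative sketch of takeover-node selection; not PowerHA code.
    def takeover_node(policy, node_list, available, failed, dnp_rank=None):
        """Pick the node that acquires the resource group when `failed` goes down."""
        candidates = [n for n in node_list if n in available and n != failed]
        if policy == "FNPN":   # Fallover To Next Priority Node In The List
            return candidates[0] if candidates else None  # node-list order is the priority
        if policy == "FUDNP":  # Fallover Using Dynamic Node Priority
            return max(candidates, key=dnp_rank) if candidates else None
        if policy == "BO":     # Bring Offline (On Error Node Only)
            return None        # concurrent group stays online on the surviving nodes
        raise ValueError(policy)

    print(takeover_node("FNPN", ["nodeA", "nodeB", "nodeC"], {"nodeB", "nodeC"}, failed="nodeA"))
    # -> 'nodeB'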
Dynamic node priority policy
You can configure the fallover behavior of a resource group to use one of several predefined dynamic node priority policies. These policies define how the takeover node is chosen dynamically for the resource group. They are based on RSCT variables, such as the most available memory or the lowest central processing unit (CPU) use. This policy determines the priority order of nodes to be used in determining the destination of a resource group during an event that causes the resource group to either move or be brought online. To recover the resource group, PowerHA® SystemMirror selects the node that best fits the policy at the time of fallover. You must select a dynamic node priority policy when you select the Fallover Using Dynamic Node Priority option as the Fallover policy property for the resource group.

Select one of the following dynamic node priority policies:

Next node in the list
Select this option to have the takeover node be the next node in the node list for the resource group.
Most free memory
Select this option to have the takeover node be the one with the highest percentage of free memory.
Most processor time
Select this option to have the takeover node be the one with the most available processor time.
Least busy
Select this option to have the takeover node be the one whose disks are least busy.
Highest return value of DNP script
Select this option to have the takeover node be the one with the highest return value based on the dynamic node policy (DNP) script that is specified for the resource group. If you select this option, you must also specify a user-defined file or script.
Lowest non-zero return value of DNP script
Select this option to have the takeover node be the one with the lowest non-zero return value based on the dynamic node policy (DNP) script that is specified for the resource group. If you select this option, you must also specify a user-defined file or script.
Run file or script
Specify the full path and file name of the user-defined script that determines how the takeover node is chosen dynamically for the resource group. You must specify this information when you select either the Highest return value of DNP script option or the Lowest non-zero return value of DNP script option for the Dynamic node priority policy property.
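The following is a minimal sketch of what such a user-defined DNP script might look like, written in Python for illustration. It assumes that the numeric value PowerHA SystemMirror collects from the script is its exit status (which is limited to 0 through 255); verify how your release gathers the value before relying on this pattern. The helper function and the 100 MB scaling are hypothetical.

    #!/usr/bin/env python3
    # Hypothetical DNP script: report free memory, scaled to 100 MB units,
    # as the exit status so that the node with the most free memory wins
    # under the Highest return value of DNP script option.
    import os
    import sys

    def free_memory_mb():
        # os.sysconf keys vary by platform; on AIX you might parse vmstat output instead.
        pages = os.sysconf("SC_AVPHYS_PAGES")
        page_size = os.sysconf("SC_PAGE_SIZE")
        return pages * page_size // (1024 * 1024)

    sys.exit(min(free_memory_mb() // 100, 255))  # exit status must fit in 0-255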
Timeout (seconds)
Specify the length of time that PowerHA SystemMirror is to wait for the user-defined script to complete. If the script does not complete in the specified time on a node, the script is stopped with a return value of zero for that node. If the script does not complete in the specified time on all nodes, the default node priority is used to determine the takeover node.
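The timeout behavior described above can be modeled as follows. This is an illustrative sketch, not PowerHA code; run_on is a hypothetical remote-execution hook that is assumed to raise subprocess.TimeoutExpired when the time limit is exceeded.

    import subprocess

    def dnp_values(nodes, script, timeout_s, run_on):
        """Collect the DNP script's return value from each node."""
        values = {}
        for node in nodes:
            try:
                values[node] = run_on(node, script, timeout=timeout_s).returncode
            except subprocess.TimeoutExpired:
                values[node] = 0   # a stopped script counts as a return value of zero
        return values

    def choose_takeover(nodes, values):
        if all(v == 0 for v in values.values()):
            return nodes[0]        # no script completed: use the default node priority
        return max(nodes, key=values.get)  # or the lowest non-zero value, per the selected option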
Fallback policy
Fallback is the process in which a joining or reintegrating node acquires resources previously owned by another node. Select one of the following fallback policies for your resource group:
Fallback To Higher Priority Node In The List
Select this option to specify that the resource group is to fall back when a higher priority node joins the cluster. If you select this option, you can configure the Fallback timer property, which determines the interval of time to delay before the resource group falls back to its higher priority node. If you do not configure a fallback timer, the resource group falls back immediately when a higher priority node joins the cluster.
Never Fallback
Select this option to specify that the resource group is not to fall back when a higher priority node joins the cluster.
Fallback timer
Select the time interval to use to delay the fallback of the resource group to its higher priority node. A fallback timer is useful when you plan maintenance outages that are associated with this resource group, or when you want to schedule a fallback to occur during off-peak business hours. You can select one of the following options: Immediately, Daily, Weekly, Monthly, Yearly, or Specific date. If you select the Specific date option, you must enter the date manually.
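As a rough model of the delay, the sketch below maps each interval option to the earliest time at which the fallback may occur. It is deliberately simplified (for instance, a real Daily timer carries a time-of-day recurrence rather than a flat 24-hour delay), and the function is hypothetical.

    from datetime import datetime, timedelta

    def next_fallback(now, interval, specific_date=None):
        """Earliest time the resource group may fall back; simplified model."""
        if interval == "Immediately":
            return now
        delays = {"Daily": timedelta(days=1), "Weekly": timedelta(weeks=1),
                  "Monthly": timedelta(days=30), "Yearly": timedelta(days=365)}
        if interval in delays:
            return now + delays[interval]
        if interval == "Specific date":
            return specific_date       # the date you enter manually in the wizard
        raise ValueError(interval)

    print(next_fallback(datetime(2024, 1, 1, 2, 0), "Weekly"))  # 2024-01-08 02:00:00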
Intersite management policy
Select a policy for managing the resource group across sites. This property is available only if you are using PowerHA SystemMirror 7.1 Enterprise Edition.
Ignore
Select this option to indicate that the resource group is not to have any online secondary instances. Select this option if sites or replicated resources are not configured. You can also use this option with cross-site LVM mirroring or with High Availability Cluster Multi-Processing Extended Distance (HACMP/XD) for Metro Mirror.
Prefer primary site
Select this policy when you have resources that you want to be taken over by multiple sites in a prioritized manner. The primary instance of the resource group is brought online on the primary site at startup, and the secondary instance is started on the other site. When a site fails, the active site with the highest priority acquires the resource group, and when the failed site rejoins the cluster, the primary instance falls back to the primary site.
Online on either site
Select this policy to have the primary instance of the resource group brought online at startup on the first node, on either site, that meets the node policy criteria. The secondary instance is started on the other site. The primary instance does not fall back when the original site rejoins the cluster.
Online on both sites
Select this policy to indicate that the resource group is to be brought online on both sites at startup. To use this policy, the Startup policy property must be set to Online on All Available Nodes. When you select this option, the resource group cannot fall over or fall back. The resource group moves to another site only if no node or condition exists under which it can be brought or kept online on the site where it is currently located. The site that owns the active resource group is called the primary site.
Enable WPAR
Select this option to enable workload partitions (WPAR). Workload partitions are virtualized operating system environments, created by software, within a single instance of the AIX® operating system. This option specifies that all PowerHA SystemMirror application servers in the current resource group are to run in the specified WPAR. In addition, all service label and file system resources that are part of this resource group are assigned to the specified WPAR.
WPAR name
The WPAR name is the same as the resource group name and cannot be changed.