IBM System Storage DS Storage Manager Installer package version 11.20.x5.10 for the 32-bit (IA32/x86), 64-bit x86_64 or x64 (AMD64 or EM64T), and 64-bit Linux on Power (LoP) versions of Linux operating systems (see Important note #2 below)

Note: Linux host attachment to the DS5300, DS5100, DS5020, DS4800, DS4700, DS4300, DS4200, DS4100, and DS3950 requires the additional purchase of the IBM DS5300/DS5100/DS5020/DS4800/DS4700/DS4300/DS4200/DS4100/DS3950 Host Kit Option or Feature Code. The IBM Linux Host Kit Option contains the required IBM licensing to attach a Linux host system to the DS5300, DS5100, DS5020, DS4800, DS4700, DS4300, DS4200, DS4100, or DS3950 storage subsystems. Please contact your IBM service representatives or IBM resellers for purchasing information.

Last Update: 09/09/2015

Important:
1. A problem was found with Red Hat Enterprise Linux 7 on SAS HICs: during a controller firmware upgrade, I/O failures occur when multipathd fails to add new paths into the device map after the first controller activation. I/O failure occurs when the second controller reboots during activation. Bugzilla #1015601 has been created with Red Hat.
   WORKAROUND: You can avoid this issue by either halting I/O operations prior to performing the upgrade or enabling the queue_if_no_path feature by setting the no_path_retry parameter to queue in multipath.conf. If you change the queue_if_no_path setting, make sure to reload the multipathd service before you initiate the controller firmware upgrade process. With these settings, I/Os are held up in multipath until all paths are restored during the upgrade. You can then run the rescan-scsi-bus.sh command after the upgrade to verify that all paths are restored correctly.
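As a sketch only, a multipath.conf device stanza implementing this workaround might look like the following; the vendor and product strings shown here are placeholders and must be adjusted to match the inquiry data of your storage subsystem:

```text
devices {
    device {
        vendor        "IBM"        # placeholder: match your subsystem
        product       "1746*"      # placeholder: match your subsystem
        no_path_retry queue        # enables queue_if_no_path behavior
    }
}
```

After editing the file, reload the multipathd service (for example, service multipathd reload) before starting the controller firmware upgrade, as described above.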
After the upgrade, you can restore both settings; that is, disable the queue_if_no_path feature with the command dmsetup message mpath-device-name 0 fail_if_no_path, set the no_path_retry parameter to 30 in the multipath.conf file, and reload the multipathd service. This workaround only helps if application timeouts are much higher than controller firmware upgrade times; if application timeouts are lower, applications will error out. If this issue occurs, you can recover by running the rescan-scsi-bus.sh command after the upgrade to confirm that paths are restored correctly. If needed, run the service multipathd reload command to allow fresh creation of device maps. Restart I/O.
2. A problem was found on Red Hat Enterprise Linux 6.5 kernel 2.6.32-431.el6.x86_64, particularly with the Emulex Fibre Channel HBA 1600x and the inbox driver 8.3.7.21.4p: a path failover due to a controller reboot might cause the HBA to return an unsafe-linked-list error and might ultimately cause the server to panic and reboot. Bug 1071656 has been submitted to Red Hat.
   WORKAROUND: A host reboot resolves the issue and returns the host to optimal mode.
3. A problem was found on Red Hat Enterprise Linux 6.5 kernel 2.6.32-431.el6.x86_64, particularly with the Emulex Fibre Channel HBA 1600x and the inbox driver 8.3.7.21.4p: the Emulex driver causes a RHEL OS kernel panic because the Emulex FC HBA hardware produces link errors.
   WORKAROUND: None.
4. A problem was found with Red Hat Enterprise Linux. During an online controller firmware upgrade with a large number of volumes (for example, 256 volumes per host), random file systems may be unmounted, resulting in I/O errors. This problem occurs because udisks-daemon automatically unmounts the file systems during controller firmware upgrades. Bugzilla number 1103362 has been opened with Red Hat.
WORKAROUND: If an application error occurs, or the root filesystem runs out of space or is mounted read-only, you can perform the following recovery options:
- If the root device is not on the SAN, run the command: mount -o remount,ro /dev/{device_name} /{mount_point}
- If the boot device is on the storage array, reboot the host.
After you can access the root file system again, identify any new files that were created under the mount directory while the actual file system was unmounted. Based on application needs, restore these files to a temporary location before the actual file system is mounted again at the same location. This lets you free up space on the root file system and restore the application data to the original file system device. Uninstall the udisks package from the system.
5. A problem causing recursive reboots exists while using 7.36.08 and 7.36.12 firmware on IBM System Storage DS4000 or DS5000 systems. This problem is fixed in firmware 7.36.14.xx and above. All subsystems currently using 7.36.08 or 7.36.12 firmware MUST run the configuration database check tool (DbFix) before and after the firmware upgrade to 7.36.14.xx or higher. Instructions for obtaining and using DbFix are contained in the 7.36.14.xx or higher firmware package. Carefully read the firmware readme and the DbFix instructions before upgrading to firmware 7.36.14.xx or higher. For subsystems with firmware level 7.36.08 or 7.36.12, avoid configuration changes until a firmware upgrade to 7.36.14.xx or higher has completed successfully. Subsystems not currently using 7.36.08 or 7.36.12 do not need to run DbFix prior to upgrading to 7.36.14.xx or higher. DbFix may be run after upgrading to 7.36.14.xx or higher, but it is not required. DbFix is only applicable to subsystems using firmware 7.36.xx.xx or greater.
If problems are experienced using DbFix, or the resulting message is Check Failed, DO NOT upgrade your firmware; contact IBM support before taking any further actions.
6. Starting with IBM DS Storage Manager host software version 10.8x, there are only three separate Storage Manager host software installer packages for the various CPU platforms of the Linux operating system. There is no Storage Manager host software package for Linux operating systems based on the Itanium IA64 64-bit processors. The three supported CPU platforms are as follows:
   1. 32-bit IA32 or x86
   2. 64-bit (EM64T or AMD64) x86_64
   3. 64-bit Linux on PPC processors (LoP)
Please select the correct Storage Manager host software installer package for the CPU platform of your Linux operating system (OS).
Note: The 32-bit x86 and 64-bit x86_64 CPU-platform Linux operating systems share the same Storage Manager host software installer package in IBM DS Storage Manager version 10.70 and earlier.
7. This storage manager software package contains non-IBM code (Open Source code). Please review and agree to the Non-IBM Licenses and Notices terms stated in the DS_Storage_Manager_Non_IBM_Licenses_and_Notices_v3.pdf file before use.

(C) Copyright International Business Machines Corporation 1999, 2015. All rights reserved. US Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note:
1. Before using this information and the product it supports, read the general information in section 6 "Trademarks and Notices" in this document.
2. Microsoft Windows Notepad might not display this readme properly; if you must view it on a Windows system, use WordPad instead.
3. Refer to the IBM System Storage Support web site or CD for the IBM System Storage DS Storage Manager version 10 Installation and Host Support Guide.
This guide, along with the Storage Manager program online help, provides the installation and support information. This guide also provides information on other DS3000, DS4000, and DS5000 related publications. Please refer to the corresponding Change History document for more information on new features and modifications.

=======================================================================
CONTENTS
-----------------
1.0 Overview
2.0 Installation and Setup Instructions
3.0 Configuration Information
4.0 Unattended Mode
5.0 Web Sites and Support Phone Number
6.0 Trademarks and Notices
7.0 Disclaimer
=======================================================================

1.0 Overview
-------------------
1.1 Overview
-------------------
The 11.20 version of the IBM DS Storage Manager host software for the 32-bit and 64-bit versions of Linux operating systems is required for managing all DS3500, DCS3700, and DCS3860 storage models with controller firmware version 08.20.xx.xx or higher. It is also recommended for managing models with controller firmware version 6.5x.xx.xx or higher installed.

Important: Please refer to the "IBM System Storage DS Storage Manager Version 11.2 Installation and Host Support Guide" located at http://www.ibm.com/systems/support/ for all installation and support notes pertaining to Linux.

Notes:
1. There are only three separate IBM DS Storage Manager host software packages for the various CPU platforms of the Linux operating systems: 32-bit IA32 or x86, 64-bit (EM64T or AMD64) x86_64, and 64-bit Linux on PPC processor (LoP). Please select the appropriate host software package for your operating system environment. The host software and storage subsystem firmware files are also packaged separately and are available for download from the IBM System Storage Disk Storage Systems Technical Support web site: http://www.ibm.com/systems/support/storage/disk
2.
The IBM Storage Manager host software version 11.2x new features and changes are described in the corresponding Change History document. Please refer to this document for more information on new features and modifications.
3. The latest version of the IBM System Storage DS Storage Manager Version 11.2 Installation and Host Support Guide is also available on IBM's Support web site as a downloadable Portable Document Format (PDF) file.
4. Please refer to the IBM System Storage Interoperation Center located at http://www-03.ibm.com/systems/support/storage/config/ssic for information on supported Linux kernels, HBAs, and multipath drivers.

PRODUCTS SUPPORTED
-----------------------------------------------------------------
| New Model  | Old Model | Machine Type | Model                   |
|------------|-----------|--------------|-------------------------|
| DS5300     | N/A       | 1818         | 53A                     |
|------------|-----------|--------------|-------------------------|
| DS5100     | N/A       | 1818         | 51A                     |
|------------|-----------|--------------|-------------------------|
| DS5020     | N/A       | 1814         | 20A                     |
|------------|-----------|--------------|-------------------------|
| DS4800     | N/A       | 1815         | 82A, 82H, 84A, 84H,     |
|            |           |              | 88A, 88H, 80A, 80H      |
|------------|-----------|--------------|-------------------------|
| DS4700     | N/A       | 1814         | 70A, 70H, 72A, 72H,     |
|            |           |              | 70T, 70S, 72T, 72S      |
|------------|-----------|--------------|-------------------------|
| DS4500     | FAStT 900 | 1742         | 90X, 90U                |
|------------|-----------|--------------|-------------------------|
| DS4400     | FAStT 700 | 1742         | 1RX, 1RU                |
|------------|-----------|--------------|-------------------------|
| DS4300     | FAStT 600 | 1722         | 60X, 60U, 60J, 60K,     |
|            |           |              | 60L                     |
|------------|-----------|--------------|-------------------------|
| DS4200     | N/A       | 1814         | 7VA, 7VH                |
|------------|-----------|--------------|-------------------------|
| DS4100     | FAStT 100 | 1724         | 100, 1SC                |
|------------|-----------|--------------|-------------------------|
| DS3950     | N/A       | 1814         | 94H, 98H                |
|------------|-----------|--------------|-------------------------|
| DCS3860    | N/A       | 1813         | 86C, 96C                |
|------------|-----------|--------------|-------------------------|
| DCS3700    | N/A       | 1818         | 80C, 90C                |
|------------|-----------|--------------|-------------------------|
| DS3500     | N/A       | 1746         | C2A, A2S, A2D, C4A,     |
|            |           |              | A4S, A4D                |
|------------|-----------|--------------|-------------------------|
| DS3200     | N/A       | 1726         | 21X, 22X, 22T, HC2, HC6 |
|------------|-----------|--------------|-------------------------|
| DS3300     | N/A       | 1726         | 31X, 32X, 32T, HC3, HC7 |
|------------|-----------|--------------|-------------------------|
| DS3400     | N/A       | 1726         | 41X, 42X, 42T, HC4, HC8 |
|------------|-----------|--------------|-------------------------|
| Boot DS    | N/A       | 1726         | 22B, HC3                |
-----------------------------------------------------------------

ATTENTION:
1. For the DS4400 and DS4100 storage subsystems - all models (standard/dual controller and single controller) - controller firmware version 06.12.xx.xx or later must be used.
2. The DS4300 with Single Controller option (M/T 1722-6LU, 6LX, and 6LJ), FAStT200 (M/T 3542, all models), and FAStT500 (M/T 3552, all models) storage subsystems can no longer be managed by DS Storage Manager version 10.50.xx.23 and higher.
3. For the DS3x00 storage subsystems, please refer to the readme files posted on the IBM DS3000 System Storage support web site for the latest information about their usage, limitations, or configurations. http://www.ibm.com/systems/support/storage/disk

=======================================================================
1.2 Limitations
---------------------
IMPORTANT: The listed limitations are cumulative.
However, they are listed by DS3000/DS4000/DS5000 storage subsystem controller firmware and Storage Manager host software release to indicate the release in which they were first seen and documented.

New limitations with Storage Manager version 11.20.x5.10 (controller firmware 08.20.xx.xx):
1. Volumes might not stay on the preferred path during an online controller firmware upgrade. In rare circumstances, a volume might transfer to the alternate path during a controller firmware upgrade. If this happens, the storage array is marked Degraded and the Recovery Guru in SANtricity Storage Manager reports Volume not on preferred path.
   WORKAROUND: After completing the controller firmware upgrade process, if the Recovery Guru reports Volume Not on Preferred Path, check the volume ownership and redistribute volumes, if necessary. You can redistribute volumes in the Storage Manager by selecting Storage > Volume > Advanced > Redistribute Volumes.
2. After a power loss condition, controllers are unable to complete start-of-day (SOD) operations when power is restored. Occasionally, when power is restored following a power loss, the controllers are unable to complete start-of-day (SOD) operations. When this occurs, hosts are not able to access the storage system, and you cannot manage the storage array using Storage Manager.
   WORKAROUND: To recover from this condition, you must power the controller-drive tray off and back on. When you power the storage array back on, it completes SOD and returns to an optimal state.
3. Choosing a custom segment size. When you create a volume, the segment size is automatically chosen based upon your selection of the volume I/O characteristics type (file system, database, multimedia, custom). Large segment sizes (256 KB and larger) should only be chosen if you know that the I/O profile for that volume will be large, sequential reads and writes.
   WORKAROUND:
If you anticipate an I/O profile with frequent non-sequential single-block writes, choose a smaller segment size (for example, 32 KB) for your volume configuration.
4. Inconsistency in volume ownership transfer occurs during a parallel drive firmware update (PDFU). During a PDFU, you can transfer all volumes in a volume group or DDP, but you cannot transfer a single volume.
   WORKAROUND: Wait until the PDFU completes before attempting a single volume ownership transfer.
5. A suspect drive is marked as failed unexpectedly. Rarely, if a controller initiates a forced volume transfer while a drive is powered down, the drive might be unexpectedly marked as failed. This only happens during forced volume transfers where the owning controller fails or is removed during the power cycle. Host-requested volume transfers do not cause this issue because I/O is quiesced before the volume is transferred. You might see a degraded or failed volume depending on the data protection scheme and redundancy level at that point in time.
   WORKAROUND: Replace the failed drive.
6. The event log reports a transient Out of Compliance event during a controller reboot. In rare cases, a critical Out Of Compliance event (0x2900) is logged during a controller reboot, followed by an event (0x2901) stating the condition has been cleared.
   WORKAROUND: None required.
7. A problem was found with Red Hat Enterprise Linux 7 on SAS or IB HICs: during a controller firmware upgrade, I/O failures occur when multipathd fails to add new paths into the device map after the first controller activation. I/O failure occurs when the second controller reboots during activation. Bugzilla #1015601 has been created with Red Hat.
   WORKAROUND: You can avoid this issue by either halting I/O operations prior to performing the upgrade or enabling the queue_if_no_path feature by setting the no_path_retry parameter to queue in multipath.conf.
If you change the queue_if_no_path setting, make sure to reload the multipathd service before you initiate the controller firmware upgrade process. With these settings, I/Os are held up in multipath until all paths are restored during the upgrade. You can then run the rescan-scsi-bus.sh command after the upgrade to verify that all paths are restored correctly. After the upgrade, you can restore both settings; that is, disable the queue_if_no_path feature with the command dmsetup message mpath-device-name 0 fail_if_no_path, set the no_path_retry parameter to 30 in the multipath.conf file, and reload the multipathd service. This workaround only helps if application timeouts are much higher than controller firmware upgrade times; if application timeouts are lower, applications will error out. If this issue occurs, you can recover by running the rescan-scsi-bus.sh command after the upgrade to confirm that paths are restored correctly. If needed, run the service multipathd reload command to allow fresh creation of device maps. Restart I/O.
8. A problem was found on Red Hat Enterprise Linux 6.5 kernel 2.6.32-431.el6.x86_64, particularly with the Emulex Fibre Channel HBA 1600x and the inbox driver 8.3.7.21.4p: a path failover due to a controller reboot might cause the HBA to return an unsafe-linked-list error and might ultimately cause the server to panic and reboot. Bug 1071656 has been submitted to Red Hat.
   WORKAROUND: A host reboot resolves the issue and returns the host to optimal mode.
9. A problem was found on Red Hat Enterprise Linux 6.5 kernel 2.6.32-431.el6.x86_64, particularly with the Emulex Fibre Channel HBA 1600x and the inbox driver 8.3.7.21.4p: the Emulex driver causes a RHEL OS kernel panic because the Emulex FC HBA hardware produces link errors.
   WORKAROUND: None.
10. A problem was found with Red Hat Enterprise Linux.
During an online controller firmware upgrade with a large number of volumes (for example, 256 volumes per host), random file systems may be unmounted, resulting in I/O errors. This problem occurs because udisks-daemon automatically unmounts the file systems during controller firmware upgrades. Bugzilla number 1103362 has been opened with Red Hat.
   WORKAROUND: If an application error occurs, or the root filesystem runs out of space or is mounted read-only, you can perform the following recovery options:
   - If the root device is not on the SAN, run the command: mount -o remount,ro /dev/{device_name} /{mount_point}
   - If the boot device is on the storage array, reboot the host.
   After you can access the root file system again, identify any new files that were created under the mount directory while the actual file system was unmounted. Based on application needs, restore these files to a temporary location before the actual file system is mounted again at the same location. This lets you free up space on the root file system and restore the application data to the original file system device. Uninstall the udisks package from the system.

No new limitations with Storage Manager version 10.86.x5.43 release (controller firmware 07.86.xx.xx)

No new limitations with Storage Manager version 10.86.xx05.0035 release (controller firmware 07.86.xx.xx)

New limitations with Storage Manager version 10.86.xx05.0028 release (controller firmware 07.86.xx.xx):
1. When using the OLH with the JAWS screen reader, users will have difficulty navigating the content in the Index tab of the Help content window due to incorrect and duplicate reading of the text. Please use the Search/Find tab in the OLH. (LSIP200331090)
2. When using the accessibility software JAWS 11 or 13, users may hear the screen reading of a background window, even if that dialog is not in focus. Press INSERT+B to reinitiate reading for the dialog in focus. (LSIP200329868)
3.
Users will not be able to find the Tray tab when the storage array profile dialog is launched; navigate to the Hardware tab to find the Tray tab. (LSIP200332950)
4. Users will not be able to perform multiple array upgrades for arrays having different firmware versions. Please upgrade arrays having different firmware versions separately. (LSIP200335962)
5. An I/O error may occur if all paths are lost due to a delayed uevent. Always make sure to run the 'multipath -v0' command to rediscover a returning path. This prevents the host from encountering an I/O error from losing all paths should the alternate path fail. (LSIP200347725)
6. There may be some confusion when data is compared between the Summary tab and the storage array profile. There is no actual functionality issue; the way the contents are labelled is not consistent. (LSIP200354832)

New limitations with Storage Manager version 10.84.xx.30 release (controller firmware 07.84.xx.xx):
1. On a disk pool of more than 30 drives with T10-PI enabled, the reconstruction progress indicator never moves because a finish notification is not sent to Storage Manager. When seeing this issue, simply inserting the replacement drive in the storage subsystem will allow the reconstruction to resume. (LSIP200298553)

No new limitations with Storage Manager version 10.83.xx.23 release (controller firmware 07.83.xx.xx).

New limitations with Storage Manager version 10.83.xx.18 release (controller firmware 07.83.xx.xx):
1. A non-T10PI logical drive cannot be VolumeCopied to a logical drive with T10PI functionality enabled. A non-T10PI logical drive can only be VolumeCopied to a non-T10PI logical drive. (LSIP200263988)
2. Initiating dynamic logical drive expansion (DVE) on logical drives that are part of an asynchronous enhanced remote mirroring (ERM) relationship without write order consistency will result in a misleading error because the controller sends an incorrect return code.
DVE can only be performed on logical drives in an asynchronous ERM relationship with write order consistency or in a synchronous ERM relationship. (LSIP200287980)
3. Cache settings cannot be updated on thin logical drives. They can only be updated on the repository drives that are associated with these thin logical drives. (LSIP200288041)
4. Wait at least two minutes after canceling a logical drive creation in a disk pool before deleting any logical drives just created in that disk pool by the cancelled creation process. (LSIP200294588)
5. T10PI errors are incorrectly reported in the MEL log for T10PI-enabled logical drives that participate in an enhanced remote mirroring relationship during logical drive initialization. (LSIP200296754)
6. Having more than 20 SSD drives in a storage subsystem will result in an SSD premium feature "Out-of-Compliance" error. You must remove the extra SSD drives to bring the total number of SSDs in the storage subsystem to 20 or fewer. (LSIP200165276)
7. Pressing Shift+F10 does not display the shortcut or context menu for the active object in the Storage Manager host software windows. The workaround is to right-click with the mouse or use the Windows key on the keyboard. (LSIP200244269)
8. Storage Manager subsystem management window performance might be slow due to a Java runtime memory leak. The workaround is to close the Storage Manager client program after the management tasks are completed. (LISP200198341)

New limitations with Storage Manager version 10.77.xx.28 release (controller firmware 07.77.xx.xx):
1. Only one EXP5060 drive slot with a 3 TB SATA drive inserted can have the ATA translator firmware updated at any one time if the inserted 3 TB drive has "Incompatible" status. Simultaneously updating ATA translator firmware on multiple 3 TB drives having an "Incompatible" status might result in an "interrupted" firmware download state that requires power-cycling the subsystem to clear.
This limitation is reduced with Storage Manager version 10.83 and later; the timeout was increased to accommodate simultaneously updating ATA translator firmware on up to five 3 TB SATA drives having an "Incompatible" status.
2. The 3 TB NL SAS drive can only be used to create non-T10PI arrays and logical drives. However, in certain conditions where no hot-spare drives are available for T10PI-enabled arrays, it will be used by the controller as a hot spare for a failed drive in those T10PI-enabled arrays. The controller operates properly in this scenario, and there are no adverse effects on the system while the 3 TB NL SAS drive is being used as the hot spare in a T10PI-enabled array. This limitation is removed with controller firmware version 7.83.xx.xx and later. Please upgrade to controller firmware version 7.83.xx.xx or later if there is a need to create T10PI-enabled arrays using 3 TB NL SAS drives.

New limitations with Storage Manager version 10.77.xx.16 release (controller firmware 07.77.xx.xx):
1. When you switch between the Search tab and the TOC tab, the topic that was open in the Search tab is still open in the TOC tab. Any search terms that were highlighted in the Search tab are still highlighted in the TOC tab. This always happens when you switch between the Search tab and the TOC tab.
   Workaround: Select another topic in the TOC to remove all search term highlighting from all the topics.
2. A Support tab in the AMW kept open for a long duration will result in the display of unevenly spaced horizontal marks. If the Storage Manager is kept open for long durations, a few grey lines may be seen on the Support tab after restoring the AMW. Relaunching the AMW eliminates this problem.
3. Kernel panic while loading Red Hat 6.0 on a Power PC installation. When trying to load the RHEL6 LoP installation on a system that has an existing old OS (LPAR), the system panics.
Workaround: For a rack system with an HMC, delete the LPAR and recreate a new one for the installation to work. For a rack system without an HMC, and for a Power Blade system, go to the Open Firmware prompt and type in "dev nvram" and "wipe-nvram". The system will reboot afterward; then proceed with a normal RHEL6 installation.
4. Node unfencing can fail during RHCS startup when automatically generated host keys are used. The cluster manager service (cman) will fail to start, and the user will see the error message "key cannot be zero" in the host log. A user is somewhat likely to hit the issue if the cluster.conf file does not have host keys defined manually. Users who forgo SCSI reservation fencing altogether and rely only on power fencing will not experience this issue. Bugzilla 653504 was submitted to Red Hat for this issue.
5. RHCS services with GFS2 mounts cannot transfer between nodes when a client mounts with NFSv4. When attempting to transfer a cluster service manually while a client was connected using NFS version 4, the GFS2 mount points failed to unmount, causing the service to go to the "failed" state. The mount point, along with any other mount points exported from the same virtual IP address, becomes inaccessible. This is not likely to happen if the cluster nodes are configured not to allow mount requests from NFS version 4 clients. Red Hat Bugzilla #654333 was opened to discover the root cause and fix the problem.
6. The storage subsystem configuration script contains the logical drive creation commands with an incorrect T10PI parameter. The workaround is to manually edit the file to change instances of dataAssurance parameters to T10PI.
7. The DS3500 subsystem does not support External Key Management at this time. Please contact IBM resellers or representatives for such support in the future.
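As a sketch of the manual edit described in item 6 above, a stream edit can rewrite every dataAssurance parameter in a saved configuration script. The filename config.script and the sample command line are assumptions for illustration only; always keep a backup of the original script.

```shell
# Hypothetical stand-in for a saved subsystem configuration script;
# "config.script" and the command text are assumed for illustration.
printf 'create logicalDrive ... dataAssurance=enabled;\n' > config.script

# Back up the original (-i.bak) and replace every instance of the
# dataAssurance parameter with T10PI, as the item-6 workaround describes.
sed -i.bak 's/dataAssurance/T10PI/g' config.script
```

The .bak copy preserves the unedited script in case the change needs to be reverted before the script is replayed against the subsystem.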
Please refer to the file LegacyStorageManagerLimitations-Linux.chg in the Storage Manager host software download package or in the !Readme File for information on the legacy limitations that might still be applicable for your environment.

=======================================================================
1.3 Enhancements
---------------------------
The DS Storage Manager version 11.20.x5.10 host software, in conjunction with controller firmware version 8.20.xx.xx and higher, provides the following new functions:
- A media scan with redundancy check is enabled by default when a volume is created.
- The storage capacity reserved for configuration information (and other system data) on each drive in a volume group or a disk pool is increased from 500 MB to 5.5 GB.
- The storage management software now supports the DCS3700 and DCS3860 storage systems with Gen2 controllers.
- Adds the ability to update drive firmware while the storage array is operating and online. All selected drives are updated while the volumes stay online and the storage array responds normally to read and write requests.
- 4K sector drives are supported through 512 emulation. Previous releases use the default 512-byte sector size. Starting with this release, 4K sector drives can also be used; this is possible because these drives are exposed with a 512-byte sector size (512-byte emulation, or 512e).
- When a disk pool is being created, configuration candidates that offer tray loss protection or drawer loss protection are now identified and listed first. Tray loss protection and drawer loss protection ensure accessibility to your data if a total loss of communication occurs with a single drive tray or drawer. Your storage array must have six trays or drawers to be capable of this protection. Also, this implementation is focused on initial configuration; tray loss protection or drawer loss protection is not maintained when additional drives are added to the disk pool.
For more information, please see the IBM System Storage DS Storage Manager Version 11.2 Installation and Host Support Guide and the IBM System Storage DS Storage Manager Version 11 Copy Services User's Guide.

IMPORTANT: Do not use Storage Manager host software version 10.77.x5.xx and earlier to manage storage subsystems with the 3 TB NL SAS and SATA drive option installed.

=======================================================================
1.4 Level Recommendation and Prerequisites for the update
-----------------------------------------------------------
Note: The new features and changes in IBM Storage Manager host software version 11.20 for the various CPU platforms of the Linux OS are described in the corresponding Change History document. Please refer to this document for more information on new features and modifications.

The minimum hardware requirements for the IBM DS Storage Manager host software for the Linux operating system are as follows:
- For 32-bit Intel architecture (x86) processors, a minimum of 1024 MB of memory with a 1.0 GHz processor
- For 64-bit x86_64 (AMD64 and EM64T) processors, a minimum of 1024 MB of memory with any 64-bit x86_64 (AMD64 and EM64T) processor
- For 64-bit PPC (Linux on Power) processors, a minimum of 1024 MB of memory with any 64-bit PPC processor
(also see Dependencies section)

Code levels at time of release are as follows
------------------------------------------------------------------
IMPORTANT: There are three separate IBM DS Storage Manager host software version 11.20 packages for Linux operating system environments. Host software packages exist for each of the following Linux operating systems:
1. 32-bit x86
2. 64-bit x86_64 or x64 (AMD64 or EM64T)
3. Linux on POWER (LoP)
Please use the correct host software package for your Linux operating system environment.
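As a sketch only, the correct installer for a host can be chosen from the output of uname -m. The installer filenames are the ones listed in this readme; the architecture patterns in the case statement are assumptions about common uname -m values, not an official mapping.

```shell
# Sketch: map a CPU architecture string (as reported by `uname -m`) to
# the matching Storage Manager 11.20 installer named in this readme.
pick_installer() {
  case "$1" in
    i?86)          echo "SMIA-LINUX-11.20.0A05.0002.bin" ;;    # 32-bit x86
    x86_64)        echo "SMIA-LINUXX64-11.20.0A05.0002.bin" ;; # 64-bit x86_64
    ppc64|ppc64le) echo "SMIA-LINUXPPC-11.20.0A05.0002.bin" ;; # Linux on POWER
    *)             echo "unsupported platform: $1" ;;
  esac
}

pick_installer "$(uname -m)"
```

For a server without a graphics adapter, the selected installer can then be run in console mode as described below in this section.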
Note: The 32-bit x86 and 64-bit x86_64 CPU-platform Linux operating
systems share the same Storage Manager host software installer package
in IBM DS Storage Manager version 10.70 and earlier.

Starting with IBM DS Storage Manager version 9.16, all of the host
software packages (SMruntime, SMclient, SMesm, SMutil and SMagent) are
included in a single IBM DS Storage Manager host software installer
wizard. During the execution of the wizard, the user can choose to
install all or only certain software packages, depending on the need for
a given server.

Note: The Linux MPP/RDAC multipath driver package is not part of this
installer wizard. It must be installed separately, if needed.

For Linux servers without a graphics adapter, individual host software
installation packages are provided in the DS Storage Manager version
10.86 host software package. These individual host software installation
packages can also be downloaded from the IBM System Storage Disk Storage
Systems Technical Support web site. If you wish to use the wizard but do
not have a graphics adapter, you can run the installer with the
-i console option to run it in console mode.

The versions of the host software installer wizard for this release of
IBM DS Storage Manager version 11.20.x5.10 for the Linux operating
system, depending on the CPU platform, are as follows:

1. 32-bit IA32 or x86
   SMIA-LINUX-11.20.0A05.0002.bin
   This installer wizard will install the following versions of the host
   software packages:
   SMruntime: 11.20.xx00.0006
   SMclient:  11.20.0G00.0006
   SMesm:     11.20.0G00.0002
   SMagent:   11.20.0A00.0003
   SMutil:    11.20.0A00.0002
   Individual host software rpm package names:
   SMruntime: SMruntime-LINUX-11.20.0A05.0002-1.i586.rpm
   SMclient : SMclient-LINUX-11.20.0G05.0002-1.noarch.rpm
   SMesm    : SMesm-LINUX-11.20.0G05.0003-1.noarch.rpm
   SMutil   : SMutil-LINUX-11.20.0A05.0002-1.i386.rpm
   SMagent  : SMagent-LINUX-11.20.0A05.0003-1.i386.rpm

2.
64-bit x86_64 or x64 (EM64T or AMD64)
   SMIA-LINUXX64-11.20.0A05.0002.bin
   This installer wizard will install the following versions of the host
   software packages:
   SMruntime: 11.20.xx00.0006
   SMclient:  11.20.0G00.0006
   SMesm:     11.20.0G00.0002
   SMagent:   11.20.0A00.0003
   SMutil:    11.20.0A00.0002
   Individual host software rpm package names:
   SMruntime: SMruntime-LINUX-11.20.0A05.0002-1.x86_64.rpm
   SMclient : SMclient-LINUX-11.20.0G05.0002-1.noarch.rpm
   SMesm    : SMesm-LINUX-11.20.0G05.0003-1.noarch.rpm
   SMutil   : SMutil-LINUX-11.20.0A05.0002-1.x86_64.rpm
   SMagent  : SMagent-LINUX-11.20.0A05.0003-1.x86_64.rpm

3. Linux on POWER (LoP)
   SMIA-LINUXPPC-11.20.0A05.0002.bin
   This installer wizard will install the following versions of the host
   software packages:
   SMruntime: 11.20.xx00.0006
   SMclient:  11.20.0G00.0006
   SMesm:     11.20.0G00.0002
   SMagent:   11.20.0A00.0003
   SMutil:    11.20.0A00.0002
   Individual host software rpm package names:
   SMruntime: SMruntime-LINUX-11.20.0A05.0002-1.ppc64.rpm
   SMclient : SMclient-LINUX-11.20.0G05.0002-1.noarch.rpm
   SMesm    : SMesm-LINUX-11.20.0G05.0003-1.noarch.rpm
   SMutil   : SMutil-LINUX-11.20.0A05.0002-1.ppc64.rpm
   SMagent  : SMagent-LINUX-11.20.0A05.0003-1.ppc64.rpm

Note: The SMagent package can be installed when the Linux RDAC is
configured as the multipath (failover/failback) driver. It should not be
installed if the FC HBA Linux failover device driver is used as the
multipath driver. When using the Linux RDAC driver as the multipath
failover/failback driver, the non-failover version of the Linux FC HBA
device driver must be installed instead of the failover version. The
minimum IBM DS controller firmware version supported by the Linux RDAC
driver is 05.4x.xx.xx. Refer to the README that is part of the Linux
RDAC device driver package for installation instructions.
The latest versions of these drivers are as follows:

- IBM DS storage subsystem Linux RDAC:
  09.00.A5.22 (for 2.4 kernel only)
      rdac-LINUX-09.00.A5.22-source.tar.gz
  09.03.0B05.0439 (for RHEL 4 update 8)
      rdac-LINUX-09.03.0B05.0439-source.tar.gz
  09.03.0C05.0642 (for RHEL 5 update 5, RHEL 6u2, SLES10 SP3 and
  SLES11 SP2)
      rdac-LINUX-09.03.0C05.0642-source.tar.gz

Note: For other Linux 2.4 and 2.6 kernel operating system (OS)
environments, please refer to the IBM System Storage Interoperation
Center (SSIC) web site for the supported IBM DS Linux RDAC package for
that OS environment.

Supported Linux OS kernels
----------------------------
- SLES 10-SP4: 2.6.16.60-0.85.1
- SLES 11.3  : 3.0.13-0.27.1 (supports ALUA with the in-distro DM-MP
  driver plus kernel patches)
- SLES 12    : 3.0.13-0.27.1 (supports ALUA with the in-distro DM-MP
  driver)
- RHEL5-u11  : 2.6.18-308
- RHEL6-u5   : 2.6.32-220 (supports ALUA with the in-distro DM-MP
  driver)
- RHEL6-u6   : 2.6.32-279 (supports ALUA with the in-distro DM-MP
  driver)

=======================================================================
1.5 Dependencies
--------------------------
ATTENTION:
1. The DS5020, DS3950, EXP5000, EXP810, EXP520, and EXP395 FC-SAS drives
must have FC-SAS interposer firmware version 2264 or later installed.
Some drives with FC-SAS interposer firmware earlier than version 2264
may report incorrect inquiry information during Start of Day. This
causes the controller firmware to mark the drive as uncertified
(incompatible), which in turn causes the drive to no longer be
accessible for I/O. This condition will also cause the associated array
to go offline. Because the array is no longer accessible, the controller
firmware may not recover the cache for all LUNs under this array. This
behavior may result in data not being written. If you believe that you
have encountered this issue, please call IBM Support for assistance with
the recovery actions.
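A quick sanity check against the kernel list above can be scripted. This
is a sketch only, not an exhaustive support matrix: it tests whether the
running kernel string begins with one of the listed versions, and the
final word on support always rests with the SSIC web site.

```shell
#!/bin/sh
# Sketch: check whether `uname -r` starts with one of the kernel
# versions from the "Supported Linux OS kernels" list above.
SUPPORTED_PREFIXES="2.6.16.60-0.85.1 3.0.13-0.27.1 2.6.18-308 2.6.32-220 2.6.32-279"

kernel_listed() {
    for p in $SUPPORTED_PREFIXES; do
        case "$1" in
            "$p"*) return 0 ;;   # prefix match: kernel is on the list
        esac
    done
    return 1
}

if kernel_listed "$(uname -r)"; then
    echo "kernel is in the supported list"
else
    echo "kernel not listed - verify support on the SSIC web site"
fi
```

A prefix match is used deliberately, because distribution kernels append
architecture and errata suffixes (for example 2.6.32-220.2.1.el6.x86_64).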
The FC-SAS interposer firmware version 2264 and later is available in
the ESM/HDD firmware package version 1.78 or later.

2. The 3 TB SATA drive option for the EXP5060 expansion enclosure
requires ATA translator firmware version LW1613 or higher. The drive
will be shown as "Incompatible" if it is installed in an EXP5060 drive
slot with ATA translator firmware older than LW1613. Please refer to the
latest EXP5060 Installation, User's, and Maintenance Guide for more
information on working with the 3 TB SATA drive option.

3. The Storage Manager host software version 10.83.x5.18 or higher is
required for managing storage subsystems with 3 TB NL FC-SAS drives.
Storage Manager version 10.83.x5.18 or higher, in conjunction with
controller firmware version 7.83.xx.xx and later, allows the creation of
T10PI-enabled arrays using 3 TB NL FC-SAS drives.

4. Always check the README files (especially the Dependencies section)
that are packaged together with the firmware files for any required
minimum firmware level requirements and the firmware download sequence
for the DS3000/DS4000/DS5000 drive expansion enclosure ESM, the
DS3000/DS4000/DS5000 storage subsystem controller and the hard drive
firmware.

5. Standard installation order for Storage Manager 11.20.xx.xx and
controller firmware 08.20.xx.xx:
   1. SMruntime - always first
   2. SMesm - required by client
   3. SMclient
   4. SMagent
   5. SMutil
   6. Controller firmware and NVSRAM
   7. ESM firmware
   8. Drive firmware

IBM DS Storage Manager version 11.20 host software requires the
DS3000/DS4000/DS5000 storage subsystem controller firmware to be at
version 06.5x.xx.xx or higher. The IBM DS Storage Manager v9.60 supports
storage subsystems with controller firmware version 04.xx.xx.xx up to
05.2x.xx.xx. The IBM DS Storage Manager v10.36 supports storage
subsystems with controller firmware version 05.3x.xx.xx to 07.36.xx.xx.
The IBM DS Storage Manager v10.70 supports storage subsystems with
controller firmware version 05.4x.xx.xx to 07.70.xx.xx.
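The host software part of the installation order in step 5 above can be
expressed as a short script. This sketch is a dry run: it only prints
each rpm command so the sequence can be reviewed before anything is
executed, and the package file names follow the x86_64 examples in
section 1.4 (substitute the names for your platform).

```shell
#!/bin/sh
# Sketch of the step-5 host software installation order, as a dry run.
# SMruntime must always come first; SMesm is required by SMclient.
ORDER="SMruntime SMesm SMclient SMagent SMutil"

for component in $ORDER; do
    # Print rather than execute; drop the echo to install for real.
    echo "rpm -ivh ${component}-LINUX-11.20.*.rpm"
done
```

Controller firmware, NVSRAM, ESM firmware and drive firmware (steps 6-8)
are downloaded through the Storage Manager client, not with rpm, so they
are not part of this script.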
=======================================================================
1.6 Level Recommendation and Prerequisites for the update
-----------------------------------------------------------
IMPORTANT: Always verify your configuration with respect to
interoperability at:
http://www-03.ibm.com/systems/support/storage/config/ssic/

IMPORTANT: Always refer to the IBM System Storage Disk Storage Systems
Technical Support web site at http://www.ibm.com/support/ for the latest
released code levels.

It is recommended that you use host type LINUX for non-clustered Linux
hosts connected to your storage subsystem. The recommended failover
driver is MPP/RDAC, which requires that AVT be disabled. Any time you
download new NVSRAM, you must re-apply any changes that you have made.
Prior NVSRAM files had Linux and LinuxCLVMWare host types. The Linux
host type had AVT enabled; that host definition is now named LNXAVT. A
new host type named Linux has AVT disabled by default. If you wish to
change your host region definition so that you do not have to run the
DisableAVT script every time you load new NVSRAM, then for each host
mapping of the Linux type, select the host and perform a Change Host
Operating System, selecting the new "Linux" host type.

=======================================================================
2.0 Installation and Setup Instructions
----------------------------------------------------
Note: The web-download Storage Manager version 11.20 host software
package must first be unpacked (for example, with tar -zxvf) into a
user-defined directory. Then go to this directory and locate the Linux
directory to access the Storage Manager host software installation
file(s).

2.1 Step-by-step instructions for this code update are as follows
-------------------------------------------------------------------------------
1. If needed, install and update the driver for the IBM DS4000 Host Bus
Adapter (HBA) or any IBM-supported HBAs.
a.
Install the hardware by using the instructions that come with the
adapter.
b. Install the IBM DS4000 Host Adapter driver by using the instructions
provided in the readme file located in the HostAdapter directory on the
installation CD or downloaded with the device driver source package from
the IBM support web site.
c. Install the IBM DS Linux RDAC driver by using the instructions
provided in the readme file located in the LinuxRDAC directory on the
installation CD or downloaded with the device driver source package from
the IBM support web site, or configure the DM-MP multipath driver using
the instructions in the IBM System Storage DS Storage Manager version
10.8 Installation and Host Support Guide or the DM-MP multipath driver
information that is shipped with your Linux kernel publications.

2. If a previous version 7.x, 8.x or 9.xx of the IBM DS Storage Manager
host software (i.e., the SMruntime, SMclient, RDAC, SMutil and SMagent
packages) is installed on the server, you must uninstall it before
installing the new version of the Storage Manager host software.

3. Install the new Storage Manager host software version from the host
software package that you downloaded from the IBM Support web site.
Note: The Storage Manager installer might take up to 5 minutes to
display the end user license agreement for machine code (EULA),
depending on the operating system version. Refer to the IBM System
Storage DS Storage Manager version 11.20 Installation and Host Support
Guide or the IBM System Storage DS Storage Manager version 10
Installation and Host Support Guide for detailed installation
instructions.

CAUTION: Always check the README files (especially the Dependencies
section) that are packaged together with the firmware files for any
required minimum firmware level requirements and the firmware download
sequence for the DS4000/DS5000 drive expansion enclosure ESM, the
DS4000/DS5000 storage server controller and the hard drive firmware.
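Steps 2 and 3 above can be sketched as the following dry run. It only
prints the commands (remove the echo prefixes to execute them); the
installer file name is the x86_64 example from section 1.4, so use the
file that matches your platform, and the exact list of packages to
remove should come from the rpm query on your own server.

```shell
#!/bin/sh
# Sketch of the uninstall-then-install sequence, printed as a dry run.
INSTALLER=SMIA-LINUXX64-11.20.0A05.0002.bin

# 1) List any previously installed Storage Manager packages:
echo "rpm -qa | grep -iE 'SMruntime|SMclient|SMesm|SMagent|SMutil'"
# 2) Uninstall each package reported by the query, for example:
echo "rpm -e SMagent SMutil SMclient SMesm SMruntime"
# 3) Run the new installer; -i console runs the wizard in console mode
#    on servers without a graphics adapter:
echo "sh $INSTALLER -i console"
```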
Refer to the IBM System Storage Disk Storage Systems Technical Support
web site at http://www.ibm.com/support/ for the latest DS Storage
Manager host software, the DS4000/DS5000 controller firmware, the drive
expansion enclosure ESM firmware, and the hard disk drive code.

2.2.1 Installation of SLES11 SP1 kernel patches to support the ALUA
failover method with storage subsystems having controller firmware
7.83.xx.xx and higher installed
================================================================
Please follow the steps below to install the packages.
a) Install the operating system from the distribution-provided media.
b) The rpm package patches are placed in the LinuxALUApatch directory,
   located in the web download .tgz code package or on the host kit DVD
   media. Install the kpartx rpm package using the command below.
     rpm -Uvh kpartx-0.4.8-40.21.1.1.09.00.0000.000.<arch>.rpm
c) Install the multipath-tools rpm package using the command below.
     rpm -Uvh multipath-tools-0.4.8-40.21.1.1.09.00.0000.000.<arch>.rpm
d) Set up /etc/multipath.conf as required. (If you want to configure
   multipath.conf, copy the sample file
   /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to
   the /etc/ directory and rename the file to multipath.conf. Update the
   vendor/product information and modify the file to set the parameters
   based on your LinuxALUA patch requirement. Please refer to the IBM
   System Storage DS Storage Manager version 10.8 Installation and Host
   Support Guide for more information.)
e) Enable the multipathd service using the command below.
     chkconfig multipathd on
f) Edit the file /etc/sysconfig/kernel. Add scsi_dh_rdac to the
   INITRD_MODULES list.
g) Install the scsi_dh_rdac module using the command below.
     rpm -ivh scsi_dh_rdac-kmp-default-00.00.0000.000_2.6.32.12_0.7-sles11.1.<arch>.rpm
h) Reboot the system for the changes to take effect.

NOTE: Replace <arch> with the appropriate architecture the host is
running on; <arch> can be i586 (or i686), x86_64 or ppc64. The build
version portion of the file names can change whenever there is a bug
fix.
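The steps above can be collected into one script. The sketch below is a
dry run: run() only prints each command, ARCH is an example value
standing in for the architecture placeholder described in the NOTE, and
the build-version portion of the file names may differ on your media.

```shell
#!/bin/sh
# Dry-run sketch of the SLES11 SP1 ALUA patch installation steps above.
ARCH=x86_64            # example; may be i586/i686, x86_64 or ppc64
run() { echo "$*"; }   # print instead of execute; remove to run for real

run rpm -Uvh "kpartx-0.4.8-40.21.1.1.09.00.0000.000.${ARCH}.rpm"
run rpm -Uvh "multipath-tools-0.4.8-40.21.1.1.09.00.0000.000.${ARCH}.rpm"
run cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
run chkconfig multipathd on
# Add scsi_dh_rdac to the INITRD_MODULES list in /etc/sysconfig/kernel,
# then install the module package and reboot:
run rpm -ivh "scsi_dh_rdac-kmp-default-00.00.0000.000_2.6.32.12_0.7-sles11.1.${ARCH}.rpm"
run reboot
```

Editing multipath.conf and /etc/sysconfig/kernel is intentionally left
as a manual step, since both files need site-specific values.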
Use the --force or --nodeps option if the rpm installation fails
dependency checks.

2.2.2 Installation of Red Hat 6 U1 kernel patches to support the ALUA
failover method with storage subsystems having controller firmware
7.83.xx.xx and higher installed
================================================================
Please follow the steps below to install the packages.
a) Install the operating system from the distribution-provided media.
b) The rpm package patches are placed in the LinuxALUApatch directory,
   located in the web download .tgz code package or on the host kit DVD
   media. Install the kpartx rpm package using the command below.
     rpm -Uvh kpartx-0.4.9-41.1.el6.09.00.0000.000.<arch>.rpm
c) Install the multipath-tools library rpm using the command below.
     rpm -ivh device-mapper-multipath-libs-0.4.9-41.1.el6.09.00.0000.000.<arch>.rpm
d) Install the multipath-tools rpm package.
     rpm -ivh device-mapper-multipath-0.4.9-41.1.el6.09.00.0000.000.<arch>.rpm
e) Set up /etc/multipath.conf as required. (If you want to configure
   multipath.conf, copy the sample file
   /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf to the
   /etc/ directory. Update the vendor/product information and modify the
   file to set the parameters based on your requirements. Please refer
   to the IBM System Storage DS Storage Manager version 10.8
   Installation and Host Support Guide for more information.)
f) Enable the multipathd service using the command below.
     chkconfig multipathd on
g) Install the scsi_dh_rdac module.
     rpm -ivh scsi_dh_rdac-kmod-09.00.0000.000-el6.<arch>.rpm
h) Reboot the system for the changes to take effect.

NOTE: Replace <arch> with the appropriate architecture the host is
running on; <arch> can be i686, x86_64 or ppc64. The build version
portion of the file names can change whenever there is a bug fix. Use
the --force or --nodeps option if the rpm installation fails dependency
checks.

=======================================================================
2.2 Helpful Hints
-----------------------
1.
The DS4500 and DS4300 storage subsystems have updated recommended
drive-side cabling instructions. The DS4500 instructions are documented
in the IBM System Storage DS4500 Installation, User's, and Maintenance
Guide (GC27-2051-00 or IBM P/N 42D3302). The DS4300 instructions are
documented in the IBM System Storage DS4300 Installation, User's, and
Maintenance Guide (GC26-7722-02 or IBM P/N 42D3300). Please follow the
cabling instructions in these publications to cable a new DS4500 or
DS4300 setup. If you have an existing DS4500 setup with four drive-side
minihubs installed that was cabled according to the previously
recommended cabling instructions, please schedule down time as soon as
possible to make changes to the drive-side FC cabling. Refer to the IBM
System Storage DS4500 and DS4300 Installation, User's, and Maintenance
Guides for more information.

2. The ideal configuration for SATA drives is one drive in each EXP per
array, one logical drive per array and one OS disk partition per logical
drive. This configuration minimizes the random head movements that
increase stress on the SATA drives. As the number of drive locations to
which the heads have to move increases, application performance and
drive reliability may be impacted. If more logical drives are
configured, but not all of them are used simultaneously, some of the
randomness can be avoided. SATA drives are best used for long sequential
reads and writes.

3. IBM recommends at least one hot spare per EXPxxxx drive expansion
enclosure.

4. If you are unable to see the maximum number of drives during
Automatic Configuration, you should be able to use Manual Configuration
to select individual drives and select the maximum number of drives
allowed.

5. The DS4000/DS5000 controller host ports or the Fibre Channel HBA
ports cannot be connected to Cisco FC switch ports with "trunking"
enabled.
You might encounter failover and failback problems if you do not change
the Cisco FC switch port to "non-trunking" using the following
procedure:
a. Launch the Cisco FC switch Device Manager GUI.
b. Select one or more ports with a single click.
c. Right-click the port(s) and select Configure; a new window opens.
d. Select the "Trunk Config" tab from this window; a new window opens.
e. In this window, under Admin, select the "non-trunk" radio button (it
   is set to auto by default).
f. Refresh the entire fabric.

6. Serial connections to the IBM DS storage controller must be set to a
baud rate of either 38400 or 57600. Do not make any connections to the
IBM DS storage subsystem serial ports unless instructed by IBM Support.
Incorrect use of the serial port might result in loss of configuration
and/or loss of data.

7. Starting with the IBM DS Storage Manager (SM) host software version
9.12, the Storage Manager client script window looks for files with the
file type ".script" as the possible script command files. Previous
versions of the IBM DS Storage Manager host software looked for the file
type ".scr" instead (i.e., enableAVT.script for SM 9.12 or later versus
enableAVT.scr for pre-SM 9.12).

8. Do not delete the Access LUN or Access Volume if you want to manage
the IBM DS storage subsystem in-band (host-agent managed). The Access
LUN is required by the SMclient to communicate with the storage
controllers when using the in-band management method.

9. Fabric topology zoning requirement with AIX fcp_array (RDAC) and
Solaris RDAC only: to avoid possible problems at the host level, it is
best practice to zone all Fibre Channel (FC) switches such that a single
FC host bus adapter can only access one controller per storage array. In
addition, this zoning requirement ensures that the maximum number of
host connections can be seen by and log in to the controller FC host
port.
This is because if an FC HBA port is seen by both controller A and B
host ports, it will be counted as two host connections to the storage
subsystem - one for the controller A port and one for the controller B
port.

Note: The DS4000 storage subsystems DS4500, DS4400 and FAStT500 (IBM
machine types 1742 and 3552) have two ports per controller - one per
minihub slot. The DS4000 storage subsystems DS4300 (IBM machine type
1722) and DS4100 (IBM machine type 1724) have two ports per controller.
The DS4000 storage subsystem FAStT200 (IBM machine type 3542) has only
one port per controller. The DS4700 (IBM machine type 1814) has up to
four ports per controller. The DS4800 (IBM machine type 1815) has four
ports per controller. The DS4200 (IBM machine type 1814) has two ports
per controller.

10. All enclosures (including DS4000 storage subsystems with internal
drive slots) on any given drive loop/channel should have completely
unique IDs, especially the single-digit (x1) portion of the ID, assigned
to them. For example, in a maximally configured DS4500 storage
subsystem, enclosures on one redundant drive loop should be assigned IDs
10-17 and enclosures on the second drive loop should be assigned IDs
20-27. Enclosure IDs with the same single digit, such as 11, 21 and 31,
should not be used on the same drive loop/channel. In addition, for
enclosures with a mechanical enclosure ID switch, such as DS4300 storage
subsystems and EXP100 or EXP710 storage expansion enclosures, do not use
an enclosure ID value of 0. The reason is that, because of the physical
design and movement of the mechanical enclosure ID switch, it is
possible to leave the switch in a "dead zone" between ID numbers, which
returns an incorrect enclosure ID to the storage management software.
The most commonly returned enclosure ID is 0 (zero).
In addition to causing the subsystem management software to report an
incorrect enclosure ID, this behavior also results in an enclosure ID
conflict error with a storage expansion enclosure or IBM DS storage
subsystem whose ID is intentionally set to 0. The DS4200 and DS4700
storage subsystems and the EXP420 and EXP810 storage expansion
enclosures do not have mechanical ID switches and thus are not
susceptible to this problem. In addition, these storage subsystems and
storage expansion enclosures set their enclosure IDs automatically. IBM
recommends not making any changes to these settings unless the automatic
enclosure ID settings result in non-unique single-digit settings for
enclosures (including the storage subsystems with internal drive slots)
in a given drive loop/channel.

=======================================================================
3.0 Configuration Information
----------------------------------------
3.1 Configuration Settings
------------------------------------
1. When using the Linux RDAC as the multipath driver, the host type can
be set to either the LNXAVT, LNXCluster (Linux Cluster) or LINUX host
type. LNXAVT (the old Linux host type) has AVT enabled. This is used for
older multipath drivers that were not able to transfer LUNs on these
storage systems; the current MPP/RDAC and Linux DM-MP drivers do not
need this. The new Linux host type has AVT disabled and is appropriate
for the MPP/RDAC and DM-MP multipath drivers. LNXCluster has AVT
disabled and also provides settings for cluster solutions.

2. By default, the IBM DS Storage Manager Client does not automatically
map logical drives when the IBM DS4000/DS5000 storage partitioning
premium feature is enabled. This means that logical drives, after being
created, are not automatically presented to the host systems.
a.
For a new installation, after creating new arrays and logical drives, if
your host OS type is any of the supported versions of Linux, create a
partition with the host type of Linux and map the logical drives to this
partition.
b. If you are upgrading the NVSRAM with Storage Partitions, you may have
to change the default host type to match the host system OS, such as
LINUX or LNXCL (LNXCLVMWARE). After upgrading the NVSRAM, the default
host type is reset to Windows 2000/Server 2003 non-clustered for
DS4000/DS5000 storage subsystems with controller firmware version
06.15.xx.xx or later. For DS4000 storage servers with controller
firmware version 06.12.xx.xx or earlier, it is reset to Windows
non-clustered (SP5 or higher) instead. Refer to the IBM DS Storage
Manager online help to learn more about creating storage partitions and
changing host types.

3. Running script files for specific configurations. Apply the
appropriate scripts to your subsystem based on the instructions you have
read in the publications or any instructions in the operating system
readme file. A description of each script is shown below.
- SameWWN.script: Sets up the RAID controllers to have the same World
  Wide Names. The World Wide Names (node) will be the same for each
  controller pair. The NVSRAM default sets the RAID controllers to have
  the same World Wide Names.
- DifferentWWN.script: Sets up the RAID controllers to have different
  World Wide Names. The World Wide Names (node) will be different for
  each controller pair. The NVSRAM default sets the RAID controllers to
  have the same World Wide Names.
- EnableAVT_Linux.script: (Do not use this script when the installed
  storage subsystem controller firmware version is 7.83.xx.xx or later.)
  This script enables automatic logical drive transfer (ADT) for the
  Linux heterogeneous host region. Do not use this script unless it is
  specifically mentioned in the applicable instructions.
(This script can be used for other host types if modifications are made
  in the script, replacing the Linux host type with the appropriate host
  type that needs to have AVT/ADT enabled.)
- DisableAVT_Linux.script: This script disables automatic logical drive
  transfer (ADT) for the Linux heterogeneous host region. Do not use
  this script unless it is specifically mentioned in the applicable
  instructions. (This script can be used for other host types if
  modifications are made in the script, replacing the Linux host type
  with the appropriate host type that needs to have AVT/ADT disabled.)
- EnableCntlReset.script: This script sets the HostNVSRAMByte offset
  0x1B of the LnxclVMWare host type NVSRAM region to 1, enabling the
  propagation of a bus/target/LUN reset received by one controller to
  the other controller in a dual-controller DS4K subsystem.
- DisableCntlReset.script: This script sets the HostNVSRAMByte offset
  0x1B of the LnxclVMWare host type NVSRAM region to 0, disabling the
  propagation of a bus/target/LUN reset received by one controller to
  the other controller in a dual-controller DS4K subsystem. By default,
  propagation of the bus/target/LUN reset is enabled for the
  LnxclVMWare host type. This script should be used only when all of
  the following conditions are met:
  . The IBM DS storage subsystem does not have any LUNs mapped to any
    Linux hosts that are part of a Linux cluster configuration.
  . The IBM DS storage subsystem does not have any LUNs mapped to any
    VMware hosts.
  . The Linux hosts use Linux RDAC as the multipath driver.
  . The host type of the defined host ports is LnxclVMWare.
  Inappropriate use of this script might cause loss of access to the
  mapped LUNs from the Linux or VMware hosts.

4.
If a host in a cluster server configuration loses a physical path to a
DS4000/DS5000 storage subsystem controller, the logical drives that are
mapped to the cluster group will periodically fail over and then fail
back between cluster nodes until the failed path is restored. This
behavior is the result of the automatic logical drive failback feature
of the RDAC multipath driver. The cluster node with a failed path to a
DS4000/DS5000 controller will issue a failover command, for all logical
drives that were mapped to the cluster group, to the controller that it
can access. After a programmed interval, the nodes that did not have a
failed path will issue a failback command for the logical drives because
they can access the logical drives on both controllers, resulting in the
cluster node with the failed path being unable to access certain logical
drives. This cluster node will then issue a failover command for all
logical drives, repeating the failover-failback cycle. The workaround is
to disable this automatic failback feature. For Linux cluster (SteelEye)
environments, perform the following steps:
a. Open the /etc/mpp.conf file, change the DisableLunRebalance parameter
   to 3, and save the changes.
b. From a shell prompt, type the following command, and press Enter:
   mppUpdate
c. Reboot the computer for the changes to take effect.
d. Repeat this procedure on every system in the cluster configuration
   that has the Linux RDAC driver installed.

=======================================================================
3.2 Hardware status and information
--------------------------------------------------
For more information, refer to the IBM System Storage Disk Storage
Systems Technical Support web site.

=======================================================================
3.3 Unsupported configurations
--------------------------------------------
The configurations that are currently not supported with IBM DS Storage
Manager version 10.83 or later are listed below:

1.
The IBM EXP395 Expansion Enclosure is not supported attached to any IBM
DS Storage Subsystem other than the DS3950. EXP810 drive enclosures are
also supported in the DS3950 with the purchase of a premium feature key.
2. The IBM EXP520 Expansion Enclosure is not supported attached to any
IBM DS Storage Subsystem other than the DS5020. EXP810 drive enclosures
are also supported in the DS5020 with the purchase of a premium feature
key.
3. The IBM EXP5000 Expansion Enclosure is not supported attached to any
IBM DS Storage Subsystem other than the DS5100 and DS5300.
4. The DS4100 (machine type 1724, all models) storage subsystem does not
support the attachment of the DS4000 EXP710, EXP700 and EXP500 (FC)
drive expansion enclosures.
5. The DS4800 storage subsystem (machine type 1815, all models) does not
support the attachment of the FAStT EXP500 and DS4000 EXP700 drive
expansion enclosures.
6. The DS4200 (machine type 1814, models 7VA/H) does not support the
attachment of the DS4000 EXP100 (SATA), EXP710 (FC) and EXP810 (SATA and
FC) drive expansion enclosures. In addition, it does not support Fibre
Channel disk drive options.
7. The IBM DS4000 EXP420 Expansion Enclosure is not supported attached
to any IBM DS4000 Storage Subsystem other than the DS4200.
8. The DS4100 with the Single Controller option does not support the
attachment of DS4000 storage expansion enclosures.
9. The DS5100 and DS5300 storage subsystems do not support the
attachment of the DS4000 EXP100, EXP700 and EXP710 drive expansion
enclosures. The EXP810 is only supported through an RPQ process.
10. The DS5000 EXP5000 drive expansion enclosure is supported attached
to the DS5100 and DS5300 only.
11. The DS4700 and DS4800 storage subsystems do not support the
attachment of the DS4000 EXP700 drive expansion enclosures.
The EXP700 enclosure must be upgraded to a DS4000 EXP710 enclosure using
the DS4000 EXP700 Models 1RU/1RX Switched-ESM Option Upgrade Kit before
it can be attached to the DS4700 and DS4800 storage subsystems.
12. The DS4300 storage subsystem with the Single Controller option does
not support controller firmware version 06.xx.xx.xx. The correct
firmware version for these DS4300 storage subsystem models is
05.34.xx.xx.
13. Fibre Channel loop environments with the IBM Fibre Channel Hub,
machine types 3523 and 3534, in conjunction with the IBM Fibre Channel
Switch, machine types 2109-S16, 2109-F16 or 2109-S8. In this
configuration, the hub is connected between the switch and the IBM Fibre
Channel RAID Controllers.
14. The IBM Fibre Channel Hub, machine type 3523, connected to IBM
machine types 1722, 1724, 1742, 1814, 1815, 3542 and 3552.
15. A configuration in which a server with only one FC/SAS/iSCSI host
bus adapter connects directly to any IBM DS storage subsystem with dual
controllers is not supported. The supported configuration is one in
which the server with only one FC/SAS/iSCSI host bus adapter connects to
both controller ports of any IBM DS storage subsystem with dual
controllers via an FC/SAS/Ethernet switch (SAN-attached configuration).
=======================================================================
4.0 Unattended Mode
-------------------------------
N/A

=======================================================================
5.0 WEB Sites and Support Phone Number
--------------------------------------------------------------
5.1 IBM System Storage Disk Storage Systems Technical Support web site:
http://www.ibm.com/systems/support/storage/disk

5.2 IBM System Storage Marketing web site:
http://www.ibm.com/systems/storage/

5.3 IBM System Storage Interoperation Center (SSIC) web site:
http://www.ibm.com/systems/support/storage/ssic/

5.4 You can receive hardware service through IBM Services or through
your IBM reseller, if your reseller is authorized by IBM to provide
warranty service. See http://www.ibm.com/planetwide/ for support
telephone numbers, or in the U.S. and Canada, call 1-800-IBM-SERV
(1-800-426-7378).

IMPORTANT: You should download the latest version of the DS Storage
Manager host software, the DS3000/DS4000/DS5000 storage subsystem
controller firmware, the DS3000/DS4000/DS5000 drive expansion enclosure
ESM firmware and the drive firmware at the time of the initial
installation and when product updates become available. For more
information about how to register for support notifications, see the
following IBM Support web page:
ftp.software.ibm.com/systems/support/tools/mynotifications/overview.pdf
You can also check the Stay Informed section of the IBM Disk Support
web site, at the following address:
www.ibm.com/support/

=======================================================================
6.0 Trademarks and Notices
--------------------------
The following terms are trademarks of the IBM Corporation in the United
States or other countries or both:
IBM DS3000 DS3500 DCS3700 DCS3860 DS4000 DS5000 FAStT System Storage
the e-business logo xSeries pSeries HelpCenter

UNIX is a registered trademark of The Open Group in the United States
and other countries.
Microsoft, Windows, and Windows NT are trademarks of Microsoft
Corporation in the United States, other countries, or both. Linux is a
trademark of Linus Torvalds in the United States, other countries, or
both. Java and all Java-based trademarks and logos are trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
QLogic and SANsurfer are trademarks of QLogic Corporation. Other
company, product, or service names may be trademarks or service marks of
others.

=======================================================================
7.0 Disclaimer
--------------
7.1 THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IBM
DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT
LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE
AND MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT. BY
FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR
COPYRIGHTS.

7.2 Note to U.S. Government Users -- Documentation related to restricted
rights -- Use, duplication or disclosure is subject to restrictions set
forth in the GSA ADP Schedule Contract with IBM Corporation.