IBM System Storage DS Storage Manager version 10.86.x5.43 for AIX.

Note: AIX host attachment to the IBM 1722-all models (DS4300), 1724-all models (DS4100), 1742-models 90U/90X (DS4500), 1746-all models (DS3500), 1814-all models (DS3950, DS4200, DS4700 and DS5020), 1815-all models (DS4800) and 1818-all models (DCS3700, DS5300 and DS5100) requires the additional purchase of an IBM AIX Host Kit Option. The IBM AIX Host Kit options contain the required IBM licensing to attach an AIX host system to the DCS3700, DS3500, DS3950, DS4100, DS4200, DS4300, DS4500, DS4700, DS4800, DS5020, DS5100 or DS5300. Please contact your IBM service representative or IBM reseller for purchasing information.

Important: A problem causing recursive reboots exists when using the 7.36.08 and 7.36.12 firmware on IBM System Storage DS4000 or DS5000 systems. This problem is fixed in the 7.36.14.xx and later firmware. All subsystems currently using the 7.36.08 or 7.36.12 firmware MUST run a file system check tool (DbFix) before and after the firmware upgrade to 7.36.14.xx or later. Instructions for obtaining and using DbFix are contained in the 7.36.14.xx or later firmware package. Carefully read the firmware readme and the DbFix instructions before upgrading to firmware 7.36.14.xx or later. For subsystems with firmware level 7.36.08 or 7.36.12, avoid configuration changes until a firmware upgrade to 7.36.14.xx or later has completed successfully. Subsystems not currently using 7.36.08 or 7.36.12 do not need to run DbFix before upgrading to 7.36.14.xx or later; DbFix may be run after the upgrade, but it is not required. DbFix applies only to subsystems using 7.36.xx.xx or later firmware. If you experience problems using DbFix, or the resulting message is "Check Failed", DO NOT upgrade your firmware; contact IBM support before taking any further action.

(C) Copyright International Business Machines Corporation 1999, 2013. All rights reserved.
US Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note: Before using this information and the product it supports, read the general information in section 6.0 "Trademarks and Notices" in this document.

Last Update: 12/02/2013

IMPORTANT: This storage manager software package contains non-IBM code (Open Source code). Please review and agree to the Non-IBM Licenses and Notices terms stated in the DS_Storage_Manager_Non_IBM_Licenses_and_Notices_v3.pdf file before use.

Refer to the IBM System Storage Support web site or CD for the IBM System Storage DS Storage Manager version 10.8 Installation and Host Support Guide or the IBM System Storage DS Storage Manager version 10 Storage Manager Installation and Host Support Guide for more information. These guides, along with the Storage Manager program Online Help, provide the installation and support information. They also list other publications related to IBM DS storage subsystems. Please refer to the corresponding Change History document for more information on new features and modifications.

==============================================================================
CONTENTS
--------
1.0 Overview
2.0 Installation and Setup Instructions
3.0 Configuration Information and Usage Notes - AIX Platforms
4.0 Unattended Mode
5.0 Web Sites and Support Phone Number
6.0 Trademarks and Notices
7.0 Disclaimer
==============================================================================

1.0 Overview
------------

1.1 Overview
------------
The 10.86 version of the IBM DS Storage Manager host software for AIX is required for managing all DS3500 and DCS3700 storage models with controller firmware version 07.86.xx.xx or higher. It is also recommended for managing models with controller firmware version 6.5x.xx.xx or higher installed.
Important: Please refer to the "IBM System Storage DS Storage Manager version 10.8 Installation and Host Support Guide" located at http://www.ibm.com/support/ for all installation and support notes pertaining to AIX.

Notes:
1. The IBM Storage Manager host software version 10.8x new features and changes are described in the corresponding Change History document. Please refer to this document for more information on new features and modifications.
2. The latest version of the IBM System Storage DS Storage Manager Version 10.8 Installation and Host Support Guide is also available on IBM's Support web site as a downloadable Portable Document Format (PDF) file.
3. Please refer to the IBM System Storage Interoperation Center located at http://www-03.ibm.com/systems/support/storage/config/ssic for information on supported AIX kernels, HBAs, and multipath drivers.
4. After upgrading Storage Manager to 10.xx, the default owner and group are changed as follows:

   Before upgrade:
   directory     owner   group
   /usr; /var    bin     bin

   After upgrade:
   directory     owner   group
   /usr; /var    root    system

Products Supported
-----------------------------------------------------------------
| New Model  | Old Model | Machine Type | Model                   |
|------------|-----------|--------------|-------------------------|
| DS5300     | N/A       | 1818         | 53A                     |
|------------|-----------|--------------|-------------------------|
| DS5100     | N/A       | 1818         | 51A                     |
|------------|-----------|--------------|-------------------------|
| DS5020     | N/A       | 1814         | 20A                     |
|------------|-----------|--------------|-------------------------|
| DS4800     | N/A       | 1815         | 82A, 82H, 84A, 84H,     |
|            |           |              | 88A, 88H, 80A, 80H      |
|------------|-----------|--------------|-------------------------|
| DS4700     | N/A       | 1814         | 70A, 70H, 72A, 72H,     |
|            |           |              | 70T, 70S, 72T, 72S      |
|------------|-----------|--------------|-------------------------|
| DS4500     | FAStT 900 | 1742         | 90X, 90U                |
|------------|-----------|--------------|-------------------------|
| DS4400     | FAStT 700 | 1742         | 1RX, 1RU                |
|------------|-----------|--------------|-------------------------|
| DS4300     | FAStT 600 | 1722         | 60X, 60U, 60J, 60K,     |
|            |           |              | 60L                     |
|------------|-----------|--------------|-------------------------|
| DS4200     | N/A       | 1814         | 7VA, 7VH                |
|------------|-----------|--------------|-------------------------|
| DS4100     | FAStT 100 | 1724         | 100, 1SC                |
|------------|-----------|--------------|-------------------------|
| DS3950     | N/A       | 1814         | 94H, 98H                |
|------------|-----------|--------------|-------------------------|
| DCS3700    | N/A       | 1818         | 80C                     |
|------------|-----------|--------------|-------------------------|
| DS3500     | N/A       | 1746         | C2A, A2S, A2D, C4A,     |
|            |           |              | A4S, A4D                |
|------------|-----------|--------------|-------------------------|
| DS3200     | N/A       | 1726         | 21X, 22X, 22T, HC2, HC6 |
|------------|-----------|--------------|-------------------------|
| DS3300     | N/A       | 1726         | 31X, 32X, 32T, HC3, HC7 |
|------------|-----------|--------------|-------------------------|
| DS3400     | N/A       | 1726         | 41X, 42X, 42T, HC4, HC8 |
-----------------------------------------------------------------

ATTENTION:
1. AIX does not support host I/O with the single-controller version of the DS4300 (DS4300 SCU) subsystem. The DS4300 SCU must be upgraded to the standard (dual-controller) model or the Turbo (dual-controller) option model for AIX host attachment support.
2. For the DS4400 and DS4100 storage subsystems - all models (standard/dual controller and single controller) - controller firmware version 06.12.xx.xx or later must be used.
3. The DS4300 with Single Controller option (M/T 1722-6LU, 6LX, and 6LJ), FAStT200 (M/T 3542-all models) and FAStT500 (M/T 3552-all models) storage subsystems can no longer be managed by DS Storage Manager version 10.50.xx.23 and higher.
4.
For the DS3x00 storage subsystems, please refer to the readme files posted on the IBM DS3000 System Storage support web site for the latest information about their usage, limitations and configurations:
   http://www.ibm.com/systems/support/storage/disk

=======================================================================
1.2 Limitations (not all are AIX limitations; some are DS Storage Manager limitations)
-----------------------------------------------------------
IMPORTANT: The listed limitations are cumulative. However, they are listed by the DS storage subsystem controller firmware and Storage Manager host software release in which they were first seen and documented.

No new limitations with Storage Manager version 10.86.x5.43 release (controller firmware 07.86.xx.xx).

No new limitations with Storage Manager version 10.86.xx05.0035 release (controller firmware 07.86.xx.xx).

New limitations with Storage Manager version 10.86.xx05.0028 release (controller firmware 07.86.xx.xx):

1. When using the OLH with the JAWS screen reader, you may have difficulty navigating the content in the Index tab of the Help window due to incorrect and duplicate reading of the text. Please use the Search/Find tab in the OLH instead. (LSIP200331090)

2. When using the accessibility software JAWS 11 or 13, you may hear the screen reading of a background window even if that dialog is not in focus. Press INSERT+B to reinitiate reading of the dialog in focus. (LSIP200329868)

3. The Tray tab cannot be found when the storage array profile dialog is launched. Navigate to the Hardware tab to find the Tray tab. (LSIP200332950)

4. Multiple array upgrades cannot be performed on arrays having different firmware versions. Please upgrade arrays having different firmware versions separately. (LSIP200335962)

5.
An I/O error may occur if all paths are lost due to a delayed uevent. Always run the 'multipath -v0' command to rediscover a returning path. This prevents the host from encountering an I/O error, from losing all paths, should the alternate path also fail. (LSIP200347725)

6. The data shown in the Summary tab and the storage array profile may appear inconsistent. There is no actual functionality issue; the way the contents are labelled is not consistent. (LSIP200354832)

New limitations with Storage Manager version 10.84.xx.30 release (controller firmware 07.84.xx.xx):

1. On a disk pool of more than 30 drives with T10 PI enabled, the reconstruction progress indicator never moves because a finish notification is not sent to Storage Manager. When this issue occurs, simply inserting the replacement drive in the storage subsystem allows the reconstruction to resume. (LSIP200298553)

No new limitations with Storage Manager version 10.83.xx.23 release (controller firmware 07.83.xx.xx).

New limitations with Storage Manager version 10.83.xx.18 release (controller firmware 07.83.xx.xx):

1. A non-T10 PI logical drive cannot be VolumeCopied to a logical drive with T10 PI functionality enabled. A non-T10 PI logical drive can only be VolumeCopied to a non-T10 PI logical drive. (LSIP200263988)

2. Initiating dynamic logical drive expansion (DVE) on logical drives that are part of an asynchronous enhanced remote mirroring (ERM) relationship without write order consistency results in a misleading error, because the controller sends an incorrect return code. DVE can only be performed on logical drives in an asynchronous ERM relationship with write order consistency or in a synchronous ERM relationship. (LSIP200287980)

3. Cache settings cannot be updated on thin logical drives. They can only be updated on the repository drives associated with these thin logical drives. (LSIP200288041)

4.
After canceling a logical drive creation in a disk pool, wait at least two minutes before deleting any logical drives just created in that disk pool by the cancelled creation process. (LSIP200294588)

5. T10 PI errors are incorrectly reported in the MEL log during logical drive initialization for T10 PI-enabled logical drives that participate in an enhanced remote mirroring relationship. (LSIP200296754)

6. Having more than 20 SSD drives in a storage subsystem results in an SSD premium feature "Out-of-Compliance" error. Remove the extra SSD drives to bring the total number of SSDs in the storage subsystem to 20 or fewer. (LSIP200165276)

7. Pressing Shift+F10 does not display the shortcut or context menu for the active object in the Storage Manager host software windows. The workaround is to right-click with the mouse or use the Windows key on the keyboard. (LSIP200244269)

8. Storage Manager subsystem management window performance might be slow due to a Java runtime memory leak. The workaround is to close the Storage Manager client program after the management tasks are completed. (LSIP200198341)

New limitations with Storage Manager version 10.77.xx.28 release (controller firmware 07.77.xx.xx):

1. Only one EXP5060 drive slot with a 3 TB SATA drive inserted can have the ATA translator firmware updated at any one time if the inserted 3 TB drive has "Incompatible" status. Simultaneously updating the ATA translator firmware on multiple 3 TB drives having "Incompatible" status might result in an "interrupted" firmware download state that requires power-cycling the subsystem to clear.

2. The 3 TB NL SAS drive can only be used to create non-T10 PI arrays and logical drives. However, in certain conditions where no hot-spare drives are available for T10 PI-enabled arrays, it will be used by the controller as a hot spare for a failed drive in those T10 PI-enabled arrays.
The controller operates properly in this scenario, and there are no adverse effects to the system while the 3 TB NL SAS drive is being used as the hot spare in a T10 PI-enabled array.

New limitations with Storage Manager version 10.77.xx.16 release (controller firmware 07.77.xx.xx):

1. When you switch between the Search tab and the TOC tab, the topic that was open in the Search tab is still open in the TOC tab, and any search terms that were highlighted in the Search tab are still highlighted in the TOC tab. This always happens when switching between the two tabs. Workaround: Select another topic in the TOC to remove all search term highlighting from all topics.

2. If the Support tab in the AMW is kept open for a long duration, unevenly spaced horizontal marks are displayed. If Storage Manager is kept open for a long duration, a few grey lines may be seen on the Support tab after restoring the AMW. Re-launching the AMW eliminates this problem.

3. Kernel panic while loading RedHat 6.0 on a Power PC installation. When trying to load the RHEL6 PPC installation onto a system that has an existing old OS (LPAR), the system panics. Workaround: For a rack system with an HMC, delete the LPAR and recreate a new one for the installation to work. For a rack system without an HMC, and for the Power Blade system, go to the Open Firmware prompt and type "dev nvram" and "wipe-nvram". The system will reboot afterward; then proceed with a normal RHEL6 installation.

4. Node unfencing can fail during RHCS startup when automatically generated host keys are used. The cluster manager service (cman) fails to start, and the error message "key cannot be zero" appears in the host log. A user is somewhat likely to hit this issue if the cluster.conf file does not have host keys defined manually. Users who forgo SCSI reservation fencing altogether and rely only on power fencing will not experience this issue.
Bugzilla 653504 was submitted to RedHat for this issue.

5. RHCS services with GFS2 mounts cannot transfer between nodes when a client mounts with NFSv4. When attempting to transfer a cluster service manually while a client was connected using NFS version 4, the GFS2 mount points failed to unmount, causing the service to go to the "failed" state. The mount point, along with any other mount points exported from the same virtual IP address, becomes inaccessible. This is not likely to happen if the cluster nodes are configured not to allow mount requests from NFS version 4 clients. Red Hat Bugzilla #654333 was opened to discover the root cause and fix the problem.

6. The storage subsystem configuration script contains the logical drive creation commands with an incorrect T10 PI parameter. The workaround is to manually edit the file to change instances of dataAssurance parameters to T10PI.

7. The DS3500 subsystem does not support External Key Management at this time. Please contact IBM resellers or representatives for such support in the future.

New limitations with Storage Manager version 10.70.xx.25 release (controller firmware 07.70.xx.xx):

1. Kernel panic reported on Linux SLES10 SP3 during controller failover. The probability of this issue happening in the field is low. Workaround: Avoid placing controllers online/offline frequently. Novell was informed of this issue (Novell Bugzilla NVBZ589196).

2. CHECK CONDITION 0B/4E/00 returned during ESM download on DS5000. The interposer running LP1160 firmware reports an Overlapped Command (0B/4E/00) for a Read command containing an OXID that was very recently used for a previous Read command on the same loop to the same ALPA. In almost all cases, this command is re-driven successfully by the controller, so the impact should be negligible. Workaround: Can be avoided by halting volume I/O prior to downloading ESM firmware.

3.
If a host sees RAID volumes from the same RAID module that are discovered through different interface protocols (Fibre Channel/SAS/iSCSI), failovers will not occur properly and host I/Os will error out. If the user does not map volumes to a host through different host interfaces, this problem will not occur. Workaround: Place the controller online and reboot the server to see all volumes again.

4. Linux guest OS reported I/O errors during a controller firmware upgrade on ESX 4.1 with a QLogic HBA. The user sees I/O errors during controller activation from the guest OS because mapped devices become inaccessible and offline. Workaround: This issue occurs on various Linux guest OSes, so to avoid it, perform an offline (no I/O to controllers) controller firmware upgrade.

5. I/O error on Linux RH 4.8 after rebooting a controller on DS3500 iSCSI. The devices are disconnected from the host until the iSCSI sessions are re-established. Workaround: Restart the iSCSI service, and configure the iSCSI service to start automatically on boot.

6. Controller firmware upgrade on ESX 4.1 with SLES 11 fails with an I/O error on the SLES 11 guest partition. This issue occurs if there is a filesystem volume in the SLES 11 VM. The user sees I/O errors, and filesystem volumes in SLES 11 VMs are changed to read-only mode. Workaround: Perform the controller firmware upgrade with either no I/O running on the SLES 11 VM or no filesystem created in the SLES 11 VMs.

7. "gnome-main-menu" crashes unexpectedly while installing the host software on SLES 10. The crashes appear to happen randomly with other applications as well. After the crash, the menu reloads automatically. Dismiss the prompt and the host will reload the application. This appears to be a vendor issue.

8. I/O errors on RHEL 4.8 guests on VMware 4.1 during controller reset. VMware has suggested VMware 4.1 P01 may resolve this issue. No support was issued for RHEL 4.8 guests under VMware 4.1 over SAS. Workaround: Use RHEL 5.5.

9.
VMware guest OS not accessible on iSCSI DS3524 (VMware SR 1544798051, VMware PR 582256). VMware has suggested VMware 4.1 P01 may resolve this issue. Workaround: Use Fibre Channel or SAS connectivity.

10. When DMMP is running in a BladeCenter SAS environment, I/O errors occur during failover and controller firmware operations. Support for Device Mapper with SLES 11.1 SAS has been restricted and will not be published. Workaround: Install RDAC.

11. SLES 11.1 PPC SAN boot fails on a PS700 with a 10 Gb QLogic Ethernet adapter to a DS3512. After configuration and installation of Linux on a LUN using software iSCSI, the JS blade will not boot into the OS. Workaround: Use local boot, SAS, or Fibre Channel.

12. "No response" messages from Device Mapper devices with volumes on the non-preferred path. Any I/O operations against volumes mapped to failed devices will time out or hang. Workaround: This problem requires the host to be rebooted in order to restart I/O successfully. Update SLES 11.1 to maintenance update 20101008 containing kernel version 2.6.32.23-0.3.1. Note that this kernel version has not been fully certified by LSI and should only be used if this issue is encountered. Bugzilla #650593 contains the issue details and the fix provided by Novell.

13. LifeKeeper 7.2.0 recovery kits require multiple host ports to use SCSI reservations. Workaround: Use two single-port SAS HBAs; in this case each port is represented as a host, and LifeKeeper identifies both separately. Another way to avoid the issue is to use MPP as the failover driver.

14. Unexpected "jexec" messages during MPP installation/uninstallation on SLES 11.1. Workaround: None - https://bugzilla.novell.com/show_bug.cgi?id=651156

15. I/O write errors on mounted LifeKeeper volumes before node resources transfer to another node. Recovery: I/O can be restarted as soon as node resources are transferred to another node in the cluster. Workaround: None.

16.
Solaris 10u8 guest OS disks become inaccessible on ESX 3.5u5 during sysReboot on the array controller. This can be avoided by not using raw device mappings and instead using virtual disks. If raw device mappings are required, they must be used with virtual compatibility mode selected when adding the disks to the guest OS. Workaround: None.

17. Solaris 10u8 guest OSes reported I/O errors during a controller firmware upgrade on ESX 3.5u5 with a QLogic HBA. Permanent restriction. Workaround: Perform the upgrade with no I/O.

18. Solaris guest OS reported I/O errors during controller reset on ESX 3.5u5. The only recovery method is to reboot the failed VM host. Permanent restriction.

Please refer to the file LegacyStorageManagerLimitations-Aix.chg in the Storage Manager host software download package, or in the !Readme CD directory, for information on legacy limitations that might still be applicable to your environment.

=======================================================================
1.3 Enhancements
----------------
The DS Storage Manager version 10.86.x5.43 host software, in conjunction with controller firmware version 7.86.32.00 and higher:
- Provides the following new function:
  - Support for the DCS3860 (7.86.36.01)
- Provides fixes for the field defects as shown in the changelist file.

For more information, please view the IBM System Storage DS Storage Manager Version 10.8 Installation and Host Support Guide and the IBM System Storage DS Storage Manager Version 10 Copy Services User's Guide.

Note: The Asymmetric Logical Unit Access (ALUA) failover method is not supported with AIX hosts at this time. Please contact IBM resellers or representatives for such support in the future.

IMPORTANT: Do not use Storage Manager host software version 10.77.x5.xx or earlier to manage storage subsystems with the 3 TB NL SAS or SATA drive option installed.

1.4 Fixes
-----------------
New features and changes are described in the corresponding Change History document.
Please refer to this document for more information on new features and modifications.

1.5 Level Recommendation and Prerequisites for the update
------------------------------------------------------------
Notes:
1. New features and changes in the IBM Storage Manager host software version 10.86 for IBM AIX operating systems are described in the corresponding Change History document. Please refer to this document for more information on new features and modifications.
2. IBM recommends upgrading to Storage Manager version 10.83.x5.18 or later because it does not include the IBM Support Monitor Profiler code. This code has a security vulnerability that might allow unwanted XSS and SQL injections. If upgrading to Storage Manager version 10.83.x5.18 or later is not possible, remove the IBM Support Monitor Profiler code from the management station immediately. The instructions to uninstall the IBM Support Monitor Profiler can be found in the IBM System Storage DS Storage Manager Version 10.8 Installation and Host Support Guide.

You can also verify your configuration with respect to interoperability at:
   http://www-03.ibm.com/systems/support/storage/config/ssic/
* AIX PTF/APARs can be downloaded from:
   http://www-912.ibm.com/eserver/support/fixes/fixcentral
* Host Bus Adapter(s):
   http://www-03.ibm.com/systems/support/storage/config/ssic/
* pSeries and RS/6000 adapter code can be downloaded from:
   http://www-03.ibm.com/systems/support/storage/config/hba/index.wss

AIX OS levels
--------------
1. 5.3 TL12 SP5
2. 6.1 TL07 SP4
3. 7.1 TL01 SP4
4. VIOS 2.2.0.10-FP-24 SP02
   - Supported client OS: AIX 6.1, AIX 7.1, SLES 11.1, SLES 10.3, RH 5.5, IBM i 6.1
5. VIOS 2.2.1 and 2.2.1.4
   - Supported client OS: AIX 7.1, SLES 11.1, SLES 10.3

Refer to the IBM System Storage Interoperation Center (SSIC) web site - http://www.ibm.com/systems/support/storage/ssic/ - for information on the latest supported AIX OS, switch, and HBA released code levels.
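As a quick sanity check against the supported levels listed above, the installed AIX level can be compared to a required TL/SP prefix. This is an illustrative sketch only: the required level and the fallback value are assumptions for demonstration, not output from a real host.

```shell
# Sketch: compare the host's AIX level against a required TL/SP prefix.
# "oslevel -s" prints the full service-pack level, e.g. 6100-07-04-1216
# for AIX 6.1 TL07 SP4.
required="6100-07-04"                 # AIX 6.1 TL07 SP4, from the list above
current=$(oslevel -s 2>/dev/null) || current="6100-07-04-1216"  # fallback for illustration
case "$current" in
    "$required"*) echo "AIX level $current meets $required" ;;
    *)            echo "AIX level $current - verify against SSIC" ;;
esac
```

The same prefix comparison works for any of the levels in the list; substitute the `required` string accordingly.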
Code levels at time of release are as follows
-----------------------------------------------
The version of the host-software installer wizard for this release is SMIA-AIX-10.86.xx05.0035.bin.

Note: The web-download Storage Manager version 10.86 host software package must first be unpacked (tar -xvz) into a user-defined directory. Then go to this directory and locate the AIX directory to access the Storage Manager host software installation file(s).

Starting with IBM DS Storage Manager version 9.12, all of the host software packages are included in a single Storage Manager host software installer wizard. During the execution of the wizard, the user can choose to install all or only certain software packages, depending on the needs of a given server. There must be at least 900 MB of free space in the /opt directory for the installer wizard to install the host software packages.

* The DS3950/DS4000/DS5000 Storage Manager host software installer wizard requires a graphics adapter installed in the AIX server in order to run. If you wish to use the wizard but do not have a graphics adapter, you can execute the installer with the "-i console" option to run in console mode. In addition, there must be at least 300 MB of free space in the /opt directory for the installer wizard to install the host software packages. For AIX servers without a graphics adapter, individual host software installation packages are provided on the Storage Manager Version for AIX OSes CD under the directory named "/AIX/Individual packages".
These individual host software installation packages can also be downloaded from the IBM System Storage Disk Storage Systems Technical Support web site:
   http://www.ibm.com/systems/support/storage/disk

This installer wizard will install the following versions of the host-software packages:
   SMruntime: 10.86.0605.0002
   SMclient:  10.86.0G05.0035
   SMesm:     10.86.0G05.0007
   SMagent:   10.02.0605.0000
   SMutil:    10.01.0605.0000

1.6 Dependencies
------------------
ATTENTION:
1. The DS5020, DS3950, EXP5000, EXP810, EXP520, and EXP395 FC-SAS drives must have FC-SAS interposer firmware version 2264 or later installed. Some drives with FC-SAS interposer firmware earlier than version 2264 may report incorrect inquiry information during Start of Day. This causes the controller firmware to mark the drive uncertified (incompatible), which in turn causes the drive to no longer be accessible for I/Os. This condition also causes the associated array to go offline. Since the array is no longer accessible, the controller firmware may not recover the cache for all LUNs under this array. This behavior may result in data not being written. If you believe that you have encountered this issue, please call IBM Support for assistance with the recovery actions. FC-SAS interposer firmware version 2264 and later is available in the ESM/HDD firmware package version 1.78 or later.

2. The IBM System Storage DS Controller Firmware Upgrade Tool is required to upgrade any system from 6.xx controller firmware to 7.xx.xx.xx controller firmware. This tool has been integrated into the Enterprise Management Window of the DS Storage Manager v10.83 Client.

3. Always check the README files (especially the Dependencies section) that are packaged together with the firmware files for any required minimum firmware levels, and for the firmware download sequence for the IBM DS storage subsystem drive expansion enclosure ESM, the storage subsystem controller, and the hard drive firmware.

4.
The standard installation order for Storage Manager 10.86.xx.xx and controller firmware 07.86.xx.xx is:
   1. SMruntime - always first
   2. SMesm - required by the client
   3. SMclient
   4. SMagent
   5. SMutil
   6. Controller firmware and NVSRAM
   7. ESM firmware
   8. Drive firmware

5. The 3 TB SATA drive option for the EXP5060 expansion enclosure requires ATA translator firmware version LW1613 or higher. The drive is shown as "Incompatible" if it is installed in an EXP5060 drive slot with ATA translator firmware older than LW1613. Please refer to the latest EXP5060 Installation, User's and Maintenance Guide for more information on working with the 3 TB SATA drive option.

6. Storage Manager host software version 10.83.x5.18 or higher is required for managing storage subsystems with 3 TB NL FC-SAS drives. Storage Manager version 10.83.x5.18 or higher, in conjunction with controller firmware version 7.83.xx.xx and later, allows the creation of T10 PI-enabled arrays using 3 TB NL FC-SAS drives.

IBM DS Storage Manager version 10.86 host software requires the DS3000/DS4000/DS5000 storage subsystem controller firmware to be at version 06.xx.xx.xx or higher. The IBM DS Storage Manager v9.60 supports storage subsystems with controller firmware versions 04.xx.xx.xx up to 05.2x.xx.xx. The IBM DS Storage Manager v10.36 supports storage subsystems with controller firmware versions 05.3x.xx.xx to 07.36.xx.xx. The IBM DS Storage Manager v10.70 supports storage subsystems with controller firmware versions 05.4x.xx.xx to 07.70.xx.xx.

==============================================================================
2.0 Installation and Setup Instructions
------------------------------------------
The IBM System Storage DS Storage Manager version 10.8 Installation and Host Support Guide is also available on IBM's Support web site as a downloadable Portable Document Format (PDF) file. This guide, along with the Storage Manager program Online Help, provides the installation and support information.
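The unpack-and-install flow described in section 1.5 (tar -xvz, then run the SMIA installer from the AIX directory) can be sketched as follows. The package file path and the /tmp/sminstall work directory are illustrative assumptions; the installer file name is the one from this release.

```shell
# Sketch of the web-download install flow (paths are assumptions;
# substitute the actual location of your downloaded package).
mkdir -p /tmp/sminstall && cd /tmp/sminstall
tar -xvzf /path/to/SM10.86_AIX_package.tgz   # unpack into the work directory
cd AIX                                       # installer lives in the AIX directory
# Graphical wizard (requires a graphics adapter / X display):
sh ./SMIA-AIX-10.86.xx05.0035.bin
# Console-mode alternative for servers without a graphics adapter:
#   sh ./SMIA-AIX-10.86.xx05.0035.bin -i console
```

Remember the /opt free-space requirement noted in section 1.5 before launching the wizard.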
Note: The Storage Manager installer might take up to 5 minutes to display the end user license for machine code (EULA), depending on the operating system version.

=======================================================================
3.0 Configuration Information and Usage Notes - AIX Platforms
---------------------------------------------------------------
* It is important to set the queue depth to a correct size for AIX hosts. Too large a queue depth can result in lost file systems and host panics. Please refer to the IBM System Storage DS Storage Manager version 10.8 Installation and Host Support Guide for details.

* Disabling cache mirroring while write caching is enabled is permitted, but is not recommended for most environments and applications: loss of data could occur in the event of a controller failure (data in cache but not yet written to disk). Refer to the Installation and Support Guide for further information.

* SCSI-3 Persistent Reservation is supported on AIX with DS3950/DS4000/DS5000 storage systems.

* Each AIX host (server) can support 1 or 2 partitions (or host groups), each with a maximum of 256 logical drives, on AIX 5.2, 5.3 and 6.1.

* For most applications, an AIX host attaches to IBM DS storage subsystems using pairs of Fibre Channel adapters (HBAs). When using fcp_array (RDAC) as the multipath driver: for each adapter pair, one HBA must be configured to connect to controller "A" and the other to controller "B". Each HBA pair must be configured to connect to a single partition in an IBM DS storage subsystem, or to multiple IBM DS storage subsystems (fan-out). To attach an AIX host to a single or multiple IBM DS storage subsystems with two partitions, 2 HBA pairs must be used. Each HBA within a host must be configured in a separate zone from other HBAs within that same host when connected to the same IBM DS storage subsystem controller port.
In other words, only 1 HBA within a host can be configured in the same
zone with a given IBM DS storage subsystem controller port.
* For most direct-attach applications, connections to IBM DS storage
  subsystems on AIX should be configured with two HBAs for complete path
  availability. As such, dual-path configurations are restricted to the
  following:
  DS5100/DS5300 - configurations of up to eight servers (2-16 HBAs).
  Each HBA pair must be connected to both A and B host-side controller
  ports.
  DS4200/DS4700/DS4800 - configurations of up to four servers (2, 4, 6 or
  8 HBAs). Each HBA pair must be connected to both A and B host-side
  controller ports.
  DS4100/DS4300 - one- or two-server configurations only (2 or 4 HBAs).
  Each HBA pair must be connected to both A and B host-side controller
  ports.
  DS4400/DS4500 - one- or two-server configurations only (2 or 4 HBAs).
  Each HBA pair must be connected to both A and B controllers. Only 1
  connection on each host-side mini-hub can be used.
* Single HBA configurations are allowed, but each single HBA
  configuration requires that both controllers in the IBM DS storage
  subsystem be connected to the host. In a switch environment, both
  controllers must be connected to the switch within the same SAN zone as
  the HBA. In direct-attach configurations, both controllers must be
  "daisy-chained" together. This can only be done on the DS4400/DS4500
  storage servers.
* Multiple host attachment to a host-side mini-hub is not supported.
* Storage Partitions and the Default Host Group
  DS3000/DS4000/DS5000 offers a premium feature called "Storage
  Partitioning", which enables users to associate a set of logical drives
  on a storage server that can only be accessed by specified hosts and
  host ports. This association of logical drives to a set of hosts and
  host ports is called a Storage Partition.
The benefit of defining Storage Partitions is to allow controlled access
to the logical drives on the DS3950/DS4000/DS5000 storage subsystem to
only those hosts also defined in the Storage Partition. Without the use
of Storage Partitioning, all logical drives appear within what is called
the Default Host Group, and they can be accessed by any Fibre Channel
initiator that has access to the DS3950/DS4000/DS5000 host port. When
homogeneous host servers are directly attached to the
DS3950/DS4000/DS5000 storage subsystem, access to all logical drives may
be satisfactory; when attached to a SAN, zoning within the fabric can be
used to limit access to the DS3950/DS4000/DS5000 host ports to a
specific set of hosts. The DS4300 standard product (without the Turbo
feature) provides the Default Host Group. Premium features are available
to support four, eight, or sixteen Storage Partitions. If logical drive
access control is required for your configuration, particularly in SAN or
multiple-server environments, then it is recommended that you add the
option for Storage Partitions. On other DS4000/DS5000 storage subsystems,
which include a minimum number of Storage Partitions, Storage
Partitioning should be used when configuring logical drives and hosts.
* Booting from a DS4000/DS5000 subsystem utilizing SATA drives for the
  boot image is supported but not recommended for performance reasons.
* Boot images can be located on a volume in DS3950/DS4000/DS5000
  partitions that have greater than 32 LUNs per partition if using AIX
  release CDs 5.2 ML-4 (5.2H) and 5.3.0 ML-0 or above. These CDs contain
  drivers that support greater than 32 LUNs per partition. If older CDs
  are being used, the boot image must reside in a partition with 32 LUNs
  or less. AIX 5.1 does not support boot images on the IBM DS storage
  subsystem.
* Dynamic Volume Expansion (DVE) of a logical drive is only supported on
  AIX 5.2 and above. AIX 5.3 must have PTF U499974 installed before using
  DVE.
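The fcp_array (RDAC) zoning guidance earlier in this section - each HBA in
its own zone, reaching only one controller per storage subsystem - can be
sanity-checked mechanically before a zone set is activated. The helper
below is a hypothetical sketch, not a Storage Manager or switch-vendor
tool: it reads "zone-name hba controller" triples and flags any HBA that
is zoned to more than one controller.

```shell
# Hypothetical zoning sanity check (illustrative only, not an IBM tool).
# Input lines: "<zone-name> <hba> <controller>". Prints any HBA that
# appears with more than one controller, which violates the RDAC
# one-controller-per-HBA zoning guidance.
check_zoning() {
  awk '{ if (($2 in seen) && seen[$2] != $3) bad[$2] = 1; seen[$2] = $3 }
       END { for (h in bad) print h }'
}
printf 'z1 fcs0 ctlA\nz2 fcs1 ctlB\n' | check_zoning   # no output: plan OK
printf 'z1 fcs0 ctlA\nz2 fcs0 ctlB\n' | check_zoning   # flags fcs0
```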
* When booting from a DS3950/DS4000/DS5000 device, both paths to the boot
  device must be up and operational. Single HBA configurations are
  supported, but must have both controllers connected, as described
  above.
* Path fail-over is not supported during the AIX boot process. Once the
  AIX host has booted, fail-over operates normally.
* Interoperability with tape devices is supported on separate HBA and
  switch zones.
* Interoperability with IBM 2105 and SDD software is supported on
  separate HBA and switch zones.
* Online concurrent firmware and NVSRAM upgrades of the storage subsystem
  are only supported when upgrading from 06.xx.xx.xx to another version
  of 06.xx.xx.xx, or from 07.xx.xx.xx to 07.xx.xx.xx. It is highly
  recommended that online firmware upgrades be scheduled during low I/O
  loads. Upgrading firmware from 05.xx.xx.xx to 06.xx.xx.xx, or from
  06.xx.xx.xx to 07.xx.xx.xx, must be performed with no I/O. There is no
  work-around.
* When using FlashCopy, the repository volume failure policy must be set
  to "Fail FlashCopy logical drive", which is the default setting. The
  "Fail writes to base logical drive" policy is not supported on AIX, as
  data on the base logical drive could be lost.
* VolumeCopy must be used in conjunction with FlashCopy: the VolumeCopy
  source volume must be a FlashCopy logical drive. Refer to the IBM
  System Storage DS Storage Manager version 10.8 Storage Manager
  Installation and Host Support Guide or the IBM System Storage DS
  Storage Manager version 10 Storage Manager Installation and Host
  Support Guide for more information.
* Do not perform other storage management tasks, such as creating or
  deleting logical drives, reconstructing arrays, and so on, while
  downloading the storage subsystem controller firmware and ESM firmware.
  It is recommended that you close all storage management sessions (other
  than the session that you use to upgrade the firmware) to the
  DS3950/DS4000/DS5000 storage subsystem that you plan to update.
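The online-upgrade restriction described above boils down to comparing
the major version field: concurrent upgrades stay within 06.xx.xx.xx or
within 07.xx.xx.xx, while crossing majors requires quiescing all I/O. A
small illustrative helper (not an IBM utility) makes the rule concrete:

```shell
# Illustrative only - not an IBM utility. Prints "online" when the two
# firmware levels share a major version (06->06 or 07->07), per the rule
# above; otherwise "offline" (stop all I/O before upgrading).
upgrade_mode() {
  if [ "${1%%.*}" = "${2%%.*}" ]; then echo online; else echo offline; fi
}
upgrade_mode 07.36.14.xx 07.86.39.xx   # online: same major version
upgrade_mode 06.60.22.xx 07.36.14.xx   # offline: major version change
```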
* All enclosures (including IBM DS storage subsystems with internal drive
  slots) on any given drive loop/channel should have completely unique
  IDs, especially the single-digit (x1) portion of the ID, assigned to
  them. For example, in a maximally configured DS4500 storage subsystem,
  enclosures on one redundant drive loop should be assigned IDs 10-17 and
  enclosures on the second drive loop should be assigned IDs 20-27.
  Enclosure IDs with the same single digit, such as 11, 21 and 31, should
  not be used on the same drive loop/channel. In addition, for enclosures
  with a mechanical enclosure ID switch, like DS4300 storage subsystems
  or EXP100 or EXP710 storage expansion enclosures, do not use an
  enclosure ID value of 0. The reason is that, with the physical design
  and movement of the mechanical enclosure ID switch, it is possible to
  leave the switch in a "dead zone" between ID numbers, which returns an
  incorrect enclosure ID to the storage management software. The most
  commonly returned enclosure ID is 0 (zero). In addition to causing the
  subsystem management software to report an incorrect enclosure ID, this
  behavior also results in an enclosure ID conflict error with a storage
  expansion enclosure or IBM DS storage subsystem whose ID is
  intentionally set to 0. The DS4200 and DS4700 storage subsystems and
  the EXP420, EXP810, and EXP5000 storage expansion enclosures do not
  have mechanical ID switches. Thus, they are not susceptible to this
  problem. In addition, these storage subsystems and storage expansion
  enclosures set the enclosure IDs automatically. IBM recommends not
  making any changes to these settings unless the automatic enclosure ID
  settings result in non-unique single-digit settings for enclosures
  (including the storage subsystems with internal drive slots) in a given
  drive loop/channel.
* If there is a previous version (8.x or 9.x) of the IBM DS storage
  subsystem Storage Manager host software (i.e.
SMRuntime, SMClient, RDAC, SMUtil and SMAgent packages) installed on the
system, you have to uninstall it first before installing the new version
of the Storage Manager software, or you may use the install wizard to
overwrite these packages. Refer to the IBM System Storage DS Storage
Manager version 10.8 Storage Manager Installation and Host Support Guide
or the IBM System Storage DS Storage Manager version 10 Storage Manager
Installation and Host Support Guide for more information.
3.1 Helpful Hints
-----------------
1. The DS4500 and DS4300 storage subsystems have updated recommended
   drive-side cabling instructions with controller firmware 06.23.xx.xx.
   The DS4500 instructions are documented in the IBM System Storage
   DS4500 Installation, User's, and Maintenance Guide (GC27-2051-00 or
   IBM P/N 42D3302). The DS4300 instructions are documented in the IBM
   System Storage DS4300 Installation, User's, and Maintenance Guide
   (GC26-7722-02 or IBM P/N 42D3300). Please follow the cabling
   instructions in these publications to cable a new DS4500 or DS4300
   setup. If you have an existing DS4500 setup with four drive-side
   minihubs installed that was cabled according to the previously
   recommended cabling instructions, please schedule downtime as soon as
   possible to make changes to the drive-side FC cabling. Refer to the
   IBM System Storage DS4500 and DS4300 Installation, User's, and
   Maintenance Guides for more information.
2. The ideal configuration for SATA drives is one drive in each EXP per
   array, one logical drive per array, and one OS disk partition per
   logical drive. This configuration minimizes the random head movements
   that increase stress on the SATA drives. As the number of drive
   locations to which the heads have to move increases, application
   performance and drive reliability may be impacted. If more logical
   drives are configured, but not all of them are used simultaneously,
   some of the randomness can be avoided.
SATA drives are best used for long sequential reads and writes.
3. IBM recommends at least one hot spare per EXPxxxx drive expansion
   enclosure.
4. If you are unable to see the maximum number of drives during Automatic
   Configuration, you should be able to use Manual Configuration to
   select individual drives and select the maximum number of drives
   allowed.
5. The DS3950/DS4000/DS5000 controller host ports or the Fibre Channel
   HBA ports cannot be connected to Cisco FC switch ports with "trunking"
   enabled. You might encounter failover and failback problems if you do
   not change the Cisco FC switch ports to "non-trunking" using the
   following procedure:
   a. Launch the Cisco FC switch Device Manager GUI.
   b. Select one or more ports with a single click.
   c. Right-click the port(s) and select Configure; a new window opens.
   d. Select the "Trunk Config" tab in this window; a new window opens.
   e. In this window, under Admin, select the "non-trunk" radio button
      (it is set to auto by default).
   f. Refresh the entire fabric.
6. Serial connections to the IBM DS storage subsystem controller must be
   set to a baud rate of either 38400 or 57600. Do not make any
   connections to the IBM DS storage subsystem serial ports unless
   instructed by IBM Support. Incorrect use of the serial port might
   result in loss of configuration and/or loss of data.
7. Starting with the IBM DS Storage Manager (SM) host software version
   9.12, the Storage Manager client script window looks for files with
   the file type ".script" as the possible script command files. Previous
   versions of the IBM DS Storage Manager host software look for the file
   type ".scr" instead (i.e. enableAVT.script for SM 9.12 or later vs.
   enableAVT.scr for pre-SM 9.12).
8. Do not delete the Access LUN or Access Volume if you want to manage
   the IBM DS storage subsystem in-band (host-agent managed).
The Access LUN is required by the SMClient to communicate with the
storage controllers when using the in-band management method.
9. Fabric topology zoning requirement with AIX fcp_array (RDAC) and
   Solaris RDAC only: To avoid possible problems at the host level, it is
   best practice that all Fibre Channel (FC) switches be zoned such that
   a single FC host bus adapter can only access one controller per
   storage array. In addition, this zoning requirement also ensures that
   the maximum number of host connections can be seen by, and can log in
   to, the controller FC host ports. This is because, if an FC HBA port
   is seen by both controller A and B host ports, it is counted as two
   host connections to the storage subsystem - one for the controller A
   port and one for the controller B port.
   Note: The DS4000 storage subsystems DS4500, DS4400 and FAStT500 (IBM
   machine types 1742 and 3552) have two ports per controller - one per
   minihub slot. The DS4000 storage subsystems DS4300 (IBM machine type
   1722) and DS4100 (IBM machine type 1724) have two ports per
   controller. The DS4000 storage server FAStT200 (IBM machine type 3542)
   has only one port per controller. The DS4700 storage subsystem (IBM
   machine type 1814) has up to four ports per controller. The DS4800
   storage subsystem (IBM machine type 1815) has four ports per
   controller.
10. All enclosures (including IBM DS storage subsystems with internal
    drive slots) on any given drive loop/channel should have completely
    unique IDs, especially the single-digit (x1) portion of the ID,
    assigned to them. For example, in a maximally configured DS4500
    storage subsystem, enclosures on one redundant drive loop should be
    assigned IDs 10-17 and enclosures on the second drive loop should be
    assigned IDs 20-27. Enclosure IDs with the same single digit, such as
    11, 21 and 31, should not be used on the same drive loop/channel.
    In addition, for enclosures with a mechanical enclosure ID switch,
    like DS4300 storage subsystems or EXP100 or EXP710 storage expansion
    enclosures, do not use an enclosure ID value of 0. The reason is
    that, with the physical design and movement of the mechanical
    enclosure ID switch, it is possible to leave the switch in a "dead
    zone" between ID numbers, which returns an incorrect enclosure ID to
    the storage management software. The most commonly returned enclosure
    ID is 0 (zero). In addition to causing the subsystem management
    software to report an incorrect enclosure ID, this behavior also
    results in an enclosure ID conflict error with a storage expansion
    enclosure or IBM DS storage subsystem whose ID is intentionally set
    to 0. The DS4200 and DS4700 storage subsystems and the EXP420 and
    EXP810 storage expansion enclosures do not have mechanical ID
    switches. Thus, they are not susceptible to this problem. In
    addition, these storage subsystems and storage expansion enclosures
    set the enclosure IDs automatically. IBM recommends not making any
    changes to these settings unless the automatic enclosure ID settings
    result in non-unique single-digit settings for enclosures (including
    the storage subsystems with internal drive slots) in a given drive
    loop/channel.
=======================================================================
3.2 Configuration settings
--------------------------
1. By default, the IBM DS Storage Manager Client does not automatically
   map logical drives when the IBM DS3000/DS4000/DS5000 Storage
   Partitioning premium feature is enabled. This means that, after being
   created, the logical drives are not automatically presented to the
   host systems.
   a. For a new installation, after creating new arrays and logical
      drives, create a storage partition with the host type of AIX and
      map the logical drives to this partition, or change the default
      host type to AIX if the Storage Partitioning premium feature is not
      enabled.
   b.
If you are upgrading the NVSRAM with Storage Partitions, you may have to
change the default host type to match the host system OS. After upgrading
the NVSRAM, the default host type is reset to Windows 2000/Server 2003
non-clustered for IBM DS storage subsystems with controller firmware
version 06.14.xx.xx or later. For IBM DS storage subsystems with
controller firmware version 06.12.xx.xx or earlier, it is reset to
Windows non-clustered (SP5 or higher) instead. Refer to the IBM DS
Storage Manager online help to learn more about creating storage
partitions and changing host types.
3.3 Unsupported Configurations
------------------------------
The configurations that are currently not supported with IBM DS Storage
Manager version 10.83 or later are listed below:
1. The IBM EXP395 Expansion Enclosure is not supported attached to any
   IBM DS storage subsystem other than the DS3950. EXP810 drive
   enclosures are also supported on the DS3950 with the purchase of a
   premium feature key.
2. The IBM EXP520 Expansion Enclosure is not supported attached to any
   IBM DS storage subsystem other than the DS5020. EXP810 drive
   enclosures are also supported on the DS5020 with the purchase of a
   premium feature key.
3. The IBM EXP5000 Expansion Enclosure is not supported attached to any
   IBM DS storage subsystem other than the DS5100 and DS5300.
4. The DS4100 (machine type 1724, all models) storage subsystem does not
   support the attachment of the DS4000 EXP710, EXP700 and EXP500 (FC)
   drive expansion enclosures.
5. The DS4800 storage subsystem (machine type 1815, all models) does not
   support the attachment of the FAStT EXP500 and DS4000 EXP700 drive
   expansion enclosures.
6. The DS4200 (machine type 1814, models 7VA/H) does not support the
   attachment of the DS4000 EXP100 (SATA), EXP710 (FC) and EXP810 (SATA
   and FC) drive expansion enclosures. In addition, it does not support
   Fibre Channel disk drive options.
7.
The IBM DS4000 EXP420 Expansion Enclosure is not supported attached to
any IBM DS4000 storage subsystem other than the DS4200.
8. The DS4100 with the Single Controller option does not support the
   attachment of DS4000 storage expansion enclosures.
9. The DS5100 and DS5300 storage subsystems do not support the attachment
   of the DS4000 EXP100, EXP700 and EXP710 drive expansion enclosures.
   The EXP810 is only supported through an RPQ process.
10. The DS5000 EXP5000 drive expansion enclosure is supported attached to
    the DS5100 and DS5300 only.
11. The DS4700 and DS4800 storage subsystems do not support the
    attachment of the DS4000 EXP700 drive expansion enclosures. An EXP700
    enclosure must be upgraded to a DS4000 EXP710 enclosure, using the
    DS4000 EXP700 Models 1RU/1RX Switched-ESM Option Upgrade Kit, before
    it can be attached to the DS4700 and DS4800 storage subsystems.
12. The DS4300 storage subsystem with the Single Controller option does
    not support controller firmware version 06.xx.xx.xx. The correct
    firmware version for these DS4300 storage subsystem models is
    05.34.xx.xx.
13. Fibre Channel loop environments with the IBM Fibre Channel Hub,
    machine types 3523 and 3534, in conjunction with the IBM Fibre
    Channel Switch, machine types 2109-S16, 2109-F16 or 2109-S8. In this
    configuration, the hub is connected between the switch and the IBM
    Fibre Channel RAID Controllers.
14. The IBM Fibre Channel Hub, machine type 3523, connected to IBM
    machine types 1722, 1724, 1742, 1814, 1815, 3542 and 3552.
15. A configuration in which a server with only one FC/SAS host bus
    adapter connects directly to any IBM DS storage subsystem with dual
    controllers is not supported. The supported configuration is one in
    which the server with only one FC/SAS host bus adapter connects to
    both controller ports of any IBM DS storage subsystem with dual
    controllers via an FC/SAS switch (a SAN-attached configuration).
=======================================================================
4.0 Unattended Mode
-------------------
N/A
=======================================================================
5.0 Web Sites and Support Phone Number
--------------------------------------
5.1 IBM System Storage Disk Storage Systems Technical Support web site:
    http://www.ibm.com/systems/support/storage/disk
5.2 IBM System Storage Marketing web site:
    http://www.ibm.com/systems/storage/
5.3 IBM System Storage Interoperation Center (SSIC) web site:
    http://www.ibm.com/systems/support/storage/ssic/
5.4 You can receive hardware service through IBM Services or through your
    IBM reseller, if your reseller is authorized by IBM to provide
    warranty service. See http://www.ibm.com/planetwide/ for support
    telephone numbers, or in the U.S. and Canada, call 1-800-IBM-SERV
    (1-800-426-7378).
IMPORTANT: You should download the latest version of the DS Storage
Manager host software, the DS storage subsystem controller firmware, the
DS drive expansion enclosure ESM firmware and the drive firmware at the
time of the initial installation and when product updates become
available. For more information about how to register for support
notifications, see the following IBM Support web page:
ftp.software.ibm.com/systems/support/tools/mynotifications/overview.pdf
You can also check the Stay Informed section of the IBM Disk Support web
site, at the following address:
www.ibm.com/systems/support/storage/disk
=======================================================================
6.0 Trademarks and Notices
--------------------------
The following terms are trademarks of the IBM Corporation in the United
States or other countries or both: IBM, AIX, DS3000, DS3500, DCS3700,
DCS3860, DS4000, DS5000, FAStT, System Storage, the e-business logo,
xSeries, pSeries, HelpCenter.
UNIX is a registered trademark of The Open Group in the United States and
other countries.
Microsoft, Windows, and Windows NT are trademarks of Microsoft
Corporation in the United States, other countries, or both. Linux is a
trademark of Linus Torvalds in the United States, other countries, or
both. Java and all Java-based trademarks and logos are trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both. Other
company, product, or service names may be trademarks or service marks of
others.
=======================================================================
7.0 Disclaimer
--------------
7.1 THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IBM
DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT
LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE
AND MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT. BY
FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR
COPYRIGHTS.
7.2 Note to U.S. Government Users -- Documentation related to restricted
rights -- Use, duplication or disclosure is subject to restrictions set
forth in GSA ADP Schedule Contract with IBM Corporation.