IBM System Storage subsystem controller firmware version 07.77.38.00
for the DS3500 (all models), DCS3700 (all models except the Performance
Module Controller), DS3950 (all models), DS5020 (all models), DS5100
(all models), and DS5300 (all models) storage subsystems.

=====================================
BEFORE INSTALLING 07.77.38.00, PLEASE VERIFY ON SSIC (URL BELOW) THAT
YOUR STORAGE ENVIRONMENT IS SUPPORTED:
http://www.ibm.com/storage/support/config/ssic
=====================================

Important: A problem causing recursive reboots exists when using
7.36.08 and 7.36.12 firmware on IBM System Storage DS4000 or DS5000
systems. This problem is fixed in firmware 7.36.14.xx and above. All
subsystems currently using 7.36.08 or 7.36.12 firmware MUST run a file
system check tool (DbFix) before and after the firmware upgrade to
7.36.14.xx or later. Instructions for obtaining and using DbFix are
contained in the 7.36.14.xx or later firmware package. Carefully read
the firmware readme and the DbFix instructions before upgrading to
firmware 7.36.14.xx or later.

For subsystems with firmware level 7.36.08 or 7.36.12, configuration
changes should be avoided until a firmware upgrade to 7.36.14.xx or
later has been completed successfully. Subsystems not currently using
7.36.08 or 7.36.12 do not need to run DbFix prior to upgrading to
7.36.14.xx or later. DbFix may be run after upgrading to 7.36.14.xx or
later, but it is not required. DbFix is applicable only to subsystems
using 7.36.xx.xx or greater firmware.

If you experience problems using DbFix, or the resulting message is
"Check Failed", DO NOT upgrade your firmware; contact IBM support
before taking any further actions.

(C) Copyright International Business Machines Corporation 1999, 2012.
All rights reserved. US Government Users Restricted Rights - Use,
duplication, or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Note: Before using this information and the product it supports, read
the general information in section 6.0 "Trademarks and Notices" in
this document.

Note: Before commencing with any firmware upgrade procedure, use the
Storage Manager client to perform a Collect All Support Data capture.
Save this support data capture on a system other than the one that is
being upgraded.

Refer to the IBM System Storage™ Support Web Site or CD for the IBM DS
Storage Manager version 10 Installation and Host Support Guide for
firmware and NVSRAM download instructions. For other related
publications, refer to Related Publications in the Installation,
User's and Maintenance guide of your DS storage subsystem or storage
expansion enclosures.

Last Update: 11/19/2012

Products Supported:
---------------------------------------------------------------
| New Model  | Old Model | Machine Type | Model                 |
|------------|-----------|--------------|-----------------------|
| DS3500     | N/A       | 1746         | C2A, A2S, A2D, C4A,   |
|            |           |              | A4S, A4D              |
|------------|-----------|--------------|-----------------------|
| DCS3700    | N/A       | 1818         | 80C                   |
|------------|-----------|--------------|-----------------------|
| DS3950     | N/A       | 1814         | 94H, 98H              |
|------------|-----------|--------------|-----------------------|
| DS5020     | N/A       | 1814         | 20A                   |
|------------|-----------|--------------|-----------------------|
| DS5100     | N/A       | 1818         | 51A                   |
|------------|-----------|--------------|-----------------------|
| DS5300     | N/A       | 1818         | 53A                   |
---------------------------------------------------------------

Supported Enclosure Attachments:
The DCS3700-80C supports the attachment of the DCS3700-80E drive
enclosure only. The DS3950 supports the attachment of EXP395 drive
enclosures. EXP810 drive enclosure attachment is supported as a
premium feature and will require a premium feature key. The DS5020
supports the attachment of EXP520 drive enclosures.
EXP810 drive enclosure attachment is supported as a premium feature
and will require a premium feature key. The DS5100 and DS5300 support
the attachment of EXP5000 and EXP5060 drive enclosures. An IBM RPQ
approval is required for support of all EXP810 migration
configurations with the DS5100 and DS5300.

Notes:
The following table shows the first four digits of the latest
controller firmware versions that are currently available for various
models of the DS3500/DS3950/DS4000/DS5000 storage subsystems.

---------------------------------------------------
| IBM Storage      | Controller firmware version    |
| Subsystem Model  |                                |
|------------------|--------------------------------|
| DS5300 (1818)    | 07.77.xx.xx                    |
|------------------|--------------------------------|
| DS5100 (1818)    | 07.77.xx.xx                    |
|------------------|--------------------------------|
| DS5020 (1814)    | 07.77.xx.xx                    |
|------------------|--------------------------------|
| DS4800 (1815)    | 07.60.xx.xx                    |
|                  | (07.60.28.00 or later only)    |
|------------------|--------------------------------|
| DS4700 (1814)    | 07.60.xx.xx                    |
|                  | (07.60.28.00 or later only)    |
|------------------|--------------------------------|
| DS4500 (1742)    | 06.60.xx.xx                    |
|------------------|--------------------------------|
| DS4400 (1742)    | 06.12.56.xx                    |
|------------------|--------------------------------|
| DS4300 Turbo     | 06.60.xx.xx                    |
| Option (1722)    |                                |
|------------------|--------------------------------|
| DS4300 Standard  | 06.60.xx.xx                    |
| Option (1722)    |                                |
|------------------|--------------------------------|
| DS4200 (1814)    | 07.60.xx.xx                    |
|                  | (07.60.28.00 or later only)    |
|------------------|--------------------------------|
| DS4100 (1724)    | 06.12.56.xx                    |
| (standard dual/  |                                |
| single           |                                |
| controller Opt.) |                                |
|------------------|--------------------------------|
| DS3950 (1814)    | 07.77.xx.xx                    |
|------------------|--------------------------------|
| DCS3700 (1818)   | 07.77.xx.xx                    |
|------------------|--------------------------------|
| DS3500 (1746)    | 07.77.xx.xx                    |
---------------------------------------------------

ATTENTION:
1. The DS4300 with Single Controller option (M/T 1722-6LU, 6LX, and
   6LJ), FAStT200 (M/T 3542, all models), and FAStT500 (M/T 3552, all
   models) storage subsystems can no longer be managed by DS Storage
   Manager version 10.50.xx.23 and later.
2. For the DS3x00 storage subsystems, please refer to the readme files
   that are posted on the IBM DS3000 System Storage support web site
   for the latest information about their usage, limitations, or
   configurations.
   http://www.ibm.com/systems/support/storage/disk

=======================================================================
CONTENTS
--------
1.0 Overview
2.0 Installation and Setup Instructions
3.0 Configuration Information
4.0 Unattended Mode
5.0 Web Sites and Support Phone Number
6.0 Trademarks and Notices
7.0 Disclaimer
=======================================================================

1.0 Overview
--------------
The IBM System Storage controller firmware version 07.77.38.00 release
includes the storage subsystem controller firmware and NVSRAM files
for the DS3500 (all models), DCS3700 (all models except the
Performance Module Controller), DS3950 (all models), DS5020 (all
models), DS5100 (all models), and DS5300 (all models). The IBM DS
Storage Manager host software version 10.77.x5.28 or later is required
to manage DS storage subsystems with controller firmware version
7.77.38.00 installed.

ATTENTION: DO NOT DOWNLOAD THIS CONTROLLER FIRMWARE ON ANY OTHER
DS3000, DS4000 or DS5000 STORAGE SUBSYSTEM MODELS.

The IBM System Storage DS Storage Manager version 10 Installation and
Host Support Guide is available on IBM's Support Web Site as a
downloadable Portable Document Format (PDF) file. In addition, the
FC/SATA intermix premium feature and the Copy Services premium
features (FlashCopy, VolumeCopy, and Enhanced Remote Mirroring) are
separately purchased options.
The Storage partitioning premium feature is standard on all IBM
DS3500, DS3950, DS4000 and DS5000 storage subsystems with the
exception of the IBM DS4100 (machine type 1724 with Standard or Single
Controller options) and the DS4300 (machine type 1722 with Standard or
Single Controller options) storage subsystems. Please contact IBM
Marketing representatives or IBM resellers if you want to purchase
additional Storage partitioning options for supported models. See
section 3.3 "Helpful Hints" for more information.

Refer to the IBM Support Web Site for the latest Firmware and NVSRAM
files and DS Storage Manager host software for the IBM DS Storage
Subsystems.
http://www.ibm.com/systems/support/storage/disk

New features that are introduced with controller firmware version
07.77.xx.xx or later will not be available for any DS5000/DS3000
storage subsystem controllers without controller firmware version
07.77.xx.xx or later installed.

The following table shows the controller firmware versions required
for attaching the various models of the
DS3500/DCS3700/DS3950/DS4000/DS5000 Storage Expansion Enclosures.
---------------------------------------------------------------------
| Controller |            EXP Storage Expansion Enclosures           |
| FW Version |-------------------------------------------------------|
|            | EXP100 | EXP420 | EXP500 | EXP700 | EXP710 | EXP810 |
|------------|---------|--------|--------|--------|--------|--------|
|5.3x.xx.xx  | No      | No     | Yes    | Yes    | No     | No     |
|------------|---------|--------|--------|--------|--------|--------|
|5.40.xx.xx  | No      | No     | Yes    | Yes    | No     | No     |
|------------|---------|--------|--------|--------|--------|--------|
|5.41.xx.xx  | Yes     | No     | No     | No     | No     | No     |
|------------|---------|--------|--------|--------|--------|--------|
|5.42.xx.xx  | Yes     | No     | No     | No     | No     | No     |
|------------|---------|--------|--------|--------|--------|--------|
|6.10.0x.xx  | Yes     | No     | Yes    | Yes    | No     | No     |
|------------|---------|--------|--------|--------|--------|--------|
|6.10.1x.xx  | Yes     | No     | Yes    | Yes    | Yes    | No     |
|------------|---------|--------|--------|--------|--------|--------|
|6.12.xx.xx  | Yes     | No     | Yes    | Yes    | Yes    | No     |
|------------|---------|--------|--------|--------|--------|--------|
|6.14.xx.xx  | Yes     | No     | No     | No     | Yes    | No     |
|------------|---------|--------|--------|--------|--------|--------|
|6.15.xx.xx  | Yes     | No     | No     | No     | Yes    | No     |
|------------|---------|--------|--------|--------|--------|--------|
|6.16.2x.xx  | No      | No     | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|6.16.8x.xx  | No      | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|6.16.9x.xx  | No      | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|6.19.xx.xx  | Yes     | No     | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|6.23.xx.xx  | Yes     | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|6.60.xx.xx  | Yes     | Yes    | No     | Yes    | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.10.xx.xx  | Yes     | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.15.xx.xx  | Yes     | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.30.xx.xx  | No      | No     | No     | No     | No     | No     |
|------------|---------|--------|--------|--------|--------|--------|
|7.36.xx.xx  | Yes     | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.50.xx.xx  | Yes     | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.60.xx.xx  | Yes     | Yes    | No     | No     | Yes    | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.70.xx.xx  | No      | No     | No     | No     | No     | Yes    |
|------------|---------|--------|--------|--------|--------|--------|
|7.77.xx.xx  | No      | No     | No     | No     | No     | Yes    |
---------------------------------------------------------------------

--------------------------------------------------------------------------------
| Controller |                EXP Storage Expansion Enclosures                  |
| FW Version |------------------------------------------------------------------|
|            | EXP520  | EXP5000    | EXP395  | EXP5060    | EXP3500 | DCS3700 |
|            |         |            |         |            |         | 80E     |
|------------|---------|------------|---------|------------|---------|---------|
|7.30.xx.xx  | No      | Yes        | No      | No         | No      | No      |
|------------|---------|------------|---------|------------|---------|---------|
|7.36.xx.xx  | No      | Yes        | No      | No         | No      | No      |
|------------|---------|------------|---------|------------|---------|---------|
|7.50.xx.xx  | No      | Yes        | No      | No         | No      | No      |
|------------|---------|------------|---------|------------|---------|---------|
|7.60.xx.xx  | Yes     | Yes        | Yes     | Yes        | No      | No      |
|------------|---------|------------|---------|------------|---------|---------|
|7.70.xx.xx  | Yes     | Yes        | Yes     | Yes        | Yes     | No      |
|------------|---------|------------|---------|------------|---------|---------|
|7.77.xx.xx  | Yes     | Yes        | Yes     | Yes        | Yes     | Yes     |
--------------------------------------------------------------------------------

The following table shows the storage subsystems (controller modules)
and minimum controller firmware versions required for T10-PI support.
Note: For T10-PI to work correctly, use drives whose feature list
states that they are T10-PI capable. Also, refer to section 1.4
"Dependencies" for further restrictions on particular drive support.

---------------------------------------------------------------
| New Model  | Machine Type | T10-PI   | Controller FW version |
|------------|--------------|----------|-----------------------|
| DS3500     | 1746         | No       | N/A                   |
|------------|--------------|----------|-----------------------|
| DCS3700    | 1818         | No       | N/A                   |
|------------|--------------|----------|-----------------------|
| DS3950     | 1814         | Yes      | 7.77.xx or later      |
|------------|--------------|----------|-----------------------|
| DS5020     | 1814         | Yes      | 7.77.xx or later      |
|------------|--------------|----------|-----------------------|
| DS5100     | 1818         | Yes      | 7.77.xx or later      |
|------------|--------------|----------|-----------------------|
| DS5300     | 1818         | Yes      | 7.77.xx or later      |
---------------------------------------------------------------

1.1 Limitations
---------------------
IMPORTANT: The listed limitations are cumulative. However, they are
grouped by the storage subsystem controller firmware and Storage
Manager host software release in which they were first seen and
documented.

Note: For limitations in certain operating system environments, refer
to the readme file that is included in the DS Storage Manager host
software package for that operating system environment.

Limitations with version 07.77.38.00 release.
N/A

Limitations with version 07.77.34.00 release.

N/A

Limitations with version 07.77.18.00 release.

1. The "Transferred on" date for a pending configuration is incorrect
   in the AMW Physical tab. The timestamp displayed for the
   staged-firmware image will be incorrect. This problem can be
   avoided by ensuring the controller's real-time clock is correct
   before upgrading firmware.
2. ESM state-capture information is truncated during support bundle
   collection. If this issue occurs, the support bundle that is
   captured will be missing some information in the ESM state capture.
   An additional support bundle can be collected to obtain valid ESM
   capture data.

Limitations with version 07.70.38.00 release.

1. The ETO setting of the 49Y4235 - Emulex Virtual Fabric Adapter
   (CFFh) for IBM BladeCenter HBA must be manually set to 144 seconds.
   Please review the publications that are shipped with the card for
   instructions on changing the setting.

Limitations with version 07.70.23.00 release.

1. An FDE drive in the "Security Locked" state is reported as
   "incompatible" in the drive profile. The drive can be recovered by
   importing the correct lock key.
2. The condition in which LUNs within a RAID array with split
   ownership between the two controllers could see false positives on
   synthesized PFA has been corrected by CR151915. The stagnant I/O
   will be discarded.
3. ESX 3.5.5: File system I/O in a SLES11 VM failed during controller
   firmware upgrade on ESX 3.5 U5 + P20 patches.
   Restriction: Running I/O on a SLES11 VM file system while updating
   controller firmware in VMware 3.5 U5 + P20 patch environments.
   Impact: The user will see I/O errors, and file system volumes in
   SLES11 VMs will be changed to read-only mode.
   Work-around: Perform the controller firmware upgrade either with no
   I/O running on the SLES11 VM or with no file system created in the
   SLES11 VMs.
   Mitigation: Unmount and remount the file system; I/O should then be
   able to restart.
4. ESX 4.1: I/O failed on a file system volume in a SLES11 VM on
   ESX 4.1.
   Restriction: Running I/O on a SLES11 VM file system while updating
   controller firmware in a VMware 4.1 environment.
   Impact: The user will see I/O errors, and file system volumes in
   SLES11 VMs will be changed to read-only mode.
   Work-around: Perform the controller firmware upgrade either with no
   I/O running on the SLES11 VM or with no file system created in the
   SLES11 VMs.
   Mitigation: Unmount and remount the file system; I/O should then be
   able to restart.

Data errors in which the head and tail sectors do not match during the
FcChipLip test with a SANbox.
   Restriction: Requires excessive host- and drive-side chip resets
   occurring simultaneously for a long time on ESX 4.1 with QLogic
   QLE2562 and SANbox 5800.
   Impact: The LBA stored in the header or tail does not match the
   expected LBA.
   Work-around: N/A
   Mitigation: None

I/Os failed in a SLES10.2 VM during the controller firmware upgrade
test on ESX 4.1 with a Brocade HBA.
   Restriction: Avoid online concurrent controller firmware download
   with Brocade 8 Gb HBAs (8x5) on all guest OSes.
   Impact: The user will often see this issue if a controller firmware
   upgrade is performed with I/O.
   Work-around: This issue occurs on various guest OSes, so to avoid
   it, perform an offline (no I/O to the controllers) controller
   firmware upgrade.
   Mitigation: Reboot the failed VM host.

BladeCenter: I/O errors reported on Linux VMs during automated
controller firmware download.
   Restriction: Running I/O on a SLES11 VM file system while updating
   controller firmware in a VMware 4.1 environment.
   Impact: The user will see I/O errors on Linux virtual machines if
   the controller firmware is upgraded while there are active I/Os.
   Work-around: Stop all I/Os to the array from the ESX 4.1 host (at
   least from the Linux virtual machines) during controller firmware
   downloads.
   Mitigation: Reissue the I/O after the controller firmware update
   has completed.

A RHEL5.5 RHCS node gets stuck during shutdown when running a node
failover test.
   Restriction: Avoid soft reboots with RHEL5.5 x64, IA64, and PPC
   with Red Hat Cluster Suite; a hard boot is required.
   Impact: The node will hang indefinitely and will not come back
   online.
   Work-around: Physically power it off.
   Mitigation: Turn the node off manually by physically powering it
   off.

RHEL5.5 RHCS GFS failed to mount the file systems during node boot-up.
   Restriction: Avoid node reboots in RHEL5.5 x64, IA64, and PPC Red
   Hat Cluster Suite environments with nodes running GFS.
   Impact: The node fails to remount all of the external storage disk
   file systems (GFS) during boot-up. Applications will lose access to
   the cluster's resources when the second node reboots or goes
   offline.
   Work-around: Create a script that sleeps for 30 seconds and allow
   it to execute at startup.
   Mitigation: Restart clvmd after boot, then restart GFS.

Limitations with version 07.60.28.00 release.

1. The 7.60.xx release includes the synthesized drive PFA feature.
   This feature provides a critical alert if drive I/Os respond slower
   than expected 30 times within a one-hour period. There is a
   condition in which LUNs within a RAID array have split ownership
   between the two controllers, where false positives could be
   received on this alert.
2. When the fan module in the EXP5060 drive expansion enclosure is
   repeatedly removed and re-inserted, the fan module's amber fault
   LED might stay lit, and the Recovery Guru in the Storage Manager
   might report that the fan is bad even though the fan module is
   working perfectly fine. This condition persists even when the fan
   is replaced with a new fan module. Because the fan was removed and
   re-inserted many times, the controller might run into a timing
   problem between the replacement of the fan module and the
   controller's polling of the ESM for status updates, and
   misinterpret the status of the removed fan as "failed". That status
   bit is never reset, causing the fan's amber fault LED to remain on
   and the failed status to persist even after a new fan CRU is
   installed. If a replacement fan module exhibits these symptoms,
   place your hand over the fan and compare the air flow to the fan
   with good status on the opposite side of the enclosure.
   If the air flow is similar, you can assume the fan with failed
   status is working properly. Schedule a time when the DS subsystem
   can be placed offline so that the whole storage subsystem
   configuration can be power-cycled to clear the incorrect status
   bit.

Limitations with version 07.60.13.05 release.

1. When modifying the iSCSI host port attributes, there can be a delay
   of up to 3 minutes if the port is inactive. If possible, connect a
   cable between the iSCSI host port and a switch prior to configuring
   any parameters.
2. When doing a Refresh DHCP operation in the Configure iSCSI Port
   window, and the iSCSI port is unable to contact the DHCP server,
   the following inconsistent informational MEL events can be
   reported:
   - MEL 1810 DHCP failure
   - MEL 1807 IP address failure
   - MEL 1811 DHCP success
   For this error to occur, the following specific conditions and
   sequence must have been met:
   - Before enabling DHCP, static addressing was used on the port, and
     that static address is still valid on the network.
   - The port was able to contact the DHCP server and acquire an
     address.
   - Contact with the DHCP server is lost.
   - The user performs the Refresh DHCP operation from the DS Storage
     Manager client.
   - Contact with the DHCP server does not come back during the DHCP
     refresh operation, and the operation times out.
   Check the network connection to your iSCSI port and the status of
   the DHCP server before attempting the Refresh DHCP operation again.

Limitations with version 07.60.13.00 release.

1. Brocade HBAs do not support direct attachment to the storage
   subsystem.
2. The Tivoli Productivity Center server does not retrieve the IPv6
   address from an SMI-S provider on an IPv6 host. You must use an
   IPv4 host.
3. Linux clients in a VIOS environment should change their error
   recovery timeout value to 300 seconds and their device timeout
   value to 120 seconds.
   The default ibmvscsic error recovery timeout is 60; the command to
   change it to 300 is:
      echo 300 > /sys/module/ibmvscsic/parameters/init_timeout
   The default device timeout is 30; the command to change it to 120
   is:
      echo 120 > /sys/block/sdb/device/timeout
4. Maximum sustainable IOPS falls slightly when going above 256 drives
   behind a DS5100 or DS5300. Typically, adding spindles improves
   performance by reducing drive-side contention, but not all I/O
   loads will benefit from adding hardware.
5. The event mechanism cannot support the level of events submitted to
   keep consistency group state parallel with repeated reboots and
   auto-resync for more than 32 RVM LUNs. Although setting RVM
   consistency group volumes to auto-resync is supported, we advise
   against using this setting, as it can defeat the purpose of the
   consistency group in a disaster recovery situation. If you must use
   it, limit to 32 the number of RVM LUNs that are in a consistency
   group and also have auto-resync set.
6. If you switch from IPv6 to IPv4 and have an iSNS server running,
   you could see the IPv6 address still showing up on the iSNS server.
   To clear this situation, disable and then re-enable iSNS on the
   controllers after you have disabled IPv6 support.
7. Long controller start-of-day times have been observed on the iSCSI
   DS5020 with a large number of mirrors configured with an
   Asynchronous with Write Order Consistency mirroring policy.

Limitations with version 07.50.13.xx release.

1. None

Limitations with version 07.50.12.xx release.

1. Start of day (a controller reboot) can take a very long time, up to
   20 minutes, after a subsystem clear configuration on a DS5000
   subsystem. This occurs on very large configurations, now that the
   DS5000 can support up to 448 drives, and can be exacerbated if
   there are SATA drives.
2. Veritas Cluster Server node failure when fast fail is enabled. We
   recommend setting the dmp_fast_fail HBA flag to OFF.
   The frequency of this causing a node failure is low unless the
   storage subsystem is experiencing repeated controller failover
   conditions.
3. Brocade HBAs do not support direct attachment to the storage
   subsystem.
4. At times, the link is not restored when inserting a drive-side
   cable into a DS5000 controller, and a data-rate mismatch occurs.
   Try reseating the cable to clear the condition.
5. Due to a timing issue with the controller firmware, using SMcli or
   the script engine to create LUNs and set LUN attributes will
   sometimes end with a script error.
6. Mapping host port identifiers to a host via the Script Editor
   hangs, and fails via the CLI with "... error code 1". This occurs
   when using the CLI/script engine to create an initial host port
   mapping.
7. Under certain high-stress conditions, a controller firmware upgrade
   will fail when volumes do not get transferred to the alternate
   controller quickly enough. Upgrade controller firmware during
   maintenance windows or under low-stress I/O conditions.
8. Concurrent controller firmware download is not supported in storage
   subsystem environments with attached VMware ESX server hosts
   running a level of VMware ESX older than VMware ESX 3.5 U5 P20.

Limitations with version 07.36.17.xx release.

None

Limitations with version 07.36.14.xx release.

None

Limitations with version 07.36.12.xx release.

1. For current PowerHA/XD (formerly HACMP) and GPFS support
   information, please review the interoperability matrix found at:
   http://www-03.ibm.com/systems/storage/disk/ds4000/interop-matrix.html
   -or-
   www.ibm.com/systems/support/storage/config/ssic/index.jsp

Limitations with version 07.36.08.xx release.

1. After you replace a drive, the controller starts the reconstruction
   process on the degraded volume group. The process starts
   successfully and progresses through half of the volume group until
   it reaches two Remote Volume Mirroring (RVM) repository volumes.
   The first RVM repository volume is reconstructed successfully, but
   the process stops when it starts to reconstruct the second
   repository volume. You must reboot the owning controller to
   continue the reconstruction process.

Limitations with version 07.30.21.xx release.

1. For current PowerHA/XD (formerly HACMP) and GPFS support
   information, please review the interoperability matrix found at:
   http://www-03.ibm.com/systems/storage/disk/ds4000/interop-matrix.html
   -or-
   www.ibm.com/systems/support/storage/config/ssic/index.jsp
2. When migrating or otherwise moving controllers or drive trays
   between systems, always quiesce I/O and allow the cache to flush
   before shutting down a system and moving components.
3. When utilizing remote mirrors in a write consistency group,
   auto-synchronization can fail to resynchronize when a mirror link
   fails and gets re-established. Manual synchronization is the
   recommended setting when mirrors are spread across multiple arrays.
4. Using an 8 KB segment size (the default) with large I/Os on a
   RAID-6 volume can result in an "ancient I/O", as a single I/O will
   span several stripes. Adjusting the segment size to better match
   the I/O sizes, such as 16 KB, will improve this performance.
5. Doing volume defragmentation during peak I/O activity can lead to
   an I/O error. It is recommended that any volume defragmentation be
   done during off-peak or maintenance windows.
6. Doing RAID migration in large configurations during peak I/O
   activity can take a very long time to complete. It is recommended
   that RAID migration be done during off-peak or maintenance windows.
7. Using legacy arrays running 7.10.23.xx firmware as a remote mirror
   with a DS5000 array can cause performance issues resulting in "Data
   on mirrored pair unsynchronized" errors. Updating the legacy
   controller to 07.15.07.xx resolves this issue.
8. The IPv6 dynamic Link-Local address for the second management port
   on the controller is not set when you toggle Stateless Autoconfig
   from Disabled to Enabled.
   The IPv6 dynamic Link-Local address is assigned when the controller
   is rebooted.
9. DS5000 storage subsystems support legacy EXP810 storage expansions.
   If you are moving these expansions from an existing DS4000 system
   to the DS5000 system as the only expansions behind the DS5000, the
   DS4000 system must be running 07.1x.xx.xx controller firmware
   first.

1.2 Enhancements
-----------------
The DS Storage Manager version 10.77.xx.28 or later host software, in
conjunction with controller firmware version 7.77.38.00 and later,
provides bug fixes to the 7.77.xx.xx thread of the controller code.
Please see the change list file for more information about the fixes.

Note: Host type VMWARE has been added to NVSRAM as an additional host
type.
   - The DS4200 and DS4700 will use index 21.
   - All other supported systems will use index 16.
Although not required, if you are using a Linux host type for a VMware
host, it is recommended that you move to the VMWARE host type:
upgrading controller firmware and NVSRAM with the Linux host type
continues to require running scripts, whereas the VMWARE host type
does not.
   - The controllers do not need to be rebooted after the change of
     host type.
   - The host will need to be rebooted.
   - Changing the host type should be done under low I/O conditions.

1.3 Prerequisites
------------------
The IBM DS Storage Manager host software version 10.77.x5.28 or later
is required to manage DS3000, DS4000, and DS5000 storage subsystems
with controller firmware version 07.77.38.00 installed.

1.4 Dependencies
-----------------
The information in this section pertains to the controller firmware
version 7.77.xx.xx release only. For the dependency requirements of
previously released controller firmware versions, please consult the
Dependencies section in the readme file that was packaged with each of
those controller firmware releases.

ATTENTION:
1.
   Always check the README files (especially the Dependencies section)
   that are packaged together with the firmware files for any required
   minimum firmware level requirements and the firmware download
   sequence for the storage/drive expansion enclosure ESMs, the
   storage subsystem controllers, and the hard drives.
2. The EXP810 and EXP520 ESM firmware version must be at or greater
   than 98C5.
3. The disk drive and ESM packages are defined in the Hard Disk Drive
   and ESM Firmware Update Package version 1.67 or later, found at the
   IBM support web site.
4. Under certain high-stress conditions, a controller firmware upgrade
   will fail when volumes do not get transferred to the alternate
   controller quickly enough. Upgrade controller firmware during
   maintenance windows or under low-stress (off-peak) I/O conditions.
5. The Storage Manager host software version 10.83.x5.18 or later is
   required for managing storage subsystems with 3 TB NL FC-SAS
   drives. Storage Manager version 10.83.x5.18 or later, in
   conjunction with controller firmware version 7.83.xx.xx and later,
   allows the creation of T10PI-enabled arrays using 3 TB NL FC-SAS
   drives.

1.5 Level Recommendations
-----------------------------------------
1. Storage Controller Firmware versions:
   a. DS3500:  FW_DS3500_07773800
   b. DCS3700: FW_DS3500_07773800
   c. DS3950:  FW_DS3950_07773800
   d. DS5020:  FW_DS5020_07773800
   e. DS5100:  FW_DS5100_07773800
   f. DS5300:  FW_DS5300_07773800
2. Storage Controller NVSRAM versions:
   a. DS3500:  N1746D35R0777V05.dlp (dual controller)
               N1746D35L0777V05.dlp (single controller)
   b. DCS3700: N1818D37R0777V05.dlp
   c. DS3950:  N1814D50R0777V05.dlp
   d. DS5020:  N1814D20R0777V05.dlp
   e. DS5100:  N1818D51R0777V05.dlp
   f. DS5300:  N1818D53R0777V05.dlp

Note: The DS3500/DS3950/DS4000/DS5000 storage subsystems shipped from
the factory may have NVSRAM versions installed with a different first
character prefix. Both the manufacturing NVSRAM version and the "N"
prefixed NVSRAM version are the same.
You do not need to update your DS3950/DS4000/DS5000 storage subsystem with the "N" prefixed NVSRAM version as stated above. For example, the N1815D480R923V08 and M1815D480R923V08 (or C1815D480R923V08 or D1815D480R923V08 or ...) versions are the same. Both versions share the same "1815D480R923V08" string value.

Refer to the following IBM System Storage™ Disk Storage Systems Technical Support web site for the latest released code levels:
http://www.ibm.com/systems/support/storage/disk

=======================================================================
2.0 Installation and Setup Instructions
-----------------------------------------
Note: Before commencing with any firmware upgrade procedure, use the Storage Manager client to perform a Collect All Support Data capture. Save this support data capture on a system other than the one that is being upgraded.

The sequence for updating your storage subsystem firmware may be different depending on whether you are updating an existing configuration, installing a new configuration, or adding drive expansion enclosures.

ATTENTION: If you have not already done so, please check the Dependencies section for ANY MINIMUM FIRMWARE REQUIREMENTS for the storage server controllers, the drive expansion enclosure ESMs, and the hard drives in the configuration. The order in which you will need to upgrade firmware levels can differ based on prerequisites or limitations specified in these readme files. If no prerequisites or limitations are specified, the upgrade order referred to in section 2.1 Installation For an Existing Configuration should be observed.

If drive firmware upgrades are required, down time will need to be scheduled. The drive firmware upgrades require that no host I/Os are sent to the storage controllers during the download.

Note: For additional setup instructions, refer to the Installation, User's and Maintenance Guide of your storage subsystem or storage expansion enclosures.
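The Collect All Support Data capture called for above can also be gathered from a management station command line with the SMcli utility that ships with the DS Storage Manager client. The sketch below only builds and prints the command rather than running it; the controller IP addresses and output path are placeholders, and the exact script-command syntax ("save storageSubsystem supportData ...") should be verified against the Command Line Interface and Script Commands guide for your Storage Manager version.

```shell
# Hypothetical controller management IP addresses -- substitute your own.
CTRL_A="192.168.128.101"
CTRL_B="192.168.128.102"

# Build the SMcli invocation for a Collect All Support Data capture.
# The script command in -c is taken from the SM CLI guide; verify it
# against the guide shipped with your Storage Manager version.
CMD="SMcli $CTRL_A $CTRL_B -c 'save storageSubsystem supportData file=\"/tmp/ds-supportdata.zip\";'"
echo "$CMD"

# Uncomment to run against a live subsystem:
# eval "$CMD"
```

Save the resulting .zip on a system other than the one being upgraded, as instructed above.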
2.1 Installation For an Existing Configuration
-------------------------------------------------
1. Upgrade the Storage Manager client program to the Storage Manager 10.77 or later version that is available from the IBM System Storage support web site. Older versions of the Storage Manager client program will show that the new firmware file is not compatible with the to-be-upgraded subsystem, even when the existing version of the controller firmware installed in the DS storage subsystem is of the 7.7x.xx.xx code thread.
   http://www.ibm.com/systems/support/storage/disk

2. Download controller firmware and NVSRAM. Please refer to the IBM DS Storage Manager version 10 Installation and Host Support Guide for additional information.

   Important: It is possible to download the controller firmware and NVSRAM at the same time by selecting the option check box in the controller firmware download window. However, if you have made any setting changes to the host parameters, downloading NVSRAM will overwrite those changes. These modifications must be reapplied after loading the new NVSRAM file. You may need to update firmware and NVSRAM during a maintenance window.

   To download controller firmware and NVSRAM using the DS Storage Manager application, do the following:
   a. Open the Subsystem Management window.
   b. Click Advanced => Maintenance => Download => Controller Firmware. Follow the online instructions.
   c. Reapply any modifications to the NVSRAM. Both controllers must be rebooted to activate the new NVSRAM settings.

   To download controller NVSRAM separately, do the following:
   a. Open the Subsystem Management window.
   b. Click Advanced => Maintenance => Download => Controller => Controller NVSRAM. Follow the online instructions.
   c. Reapply any modifications to the NVSRAM. Both controllers must be rebooted to activate the new NVSRAM settings.

3. Update the firmware of the ESMs in the attached drive expansion enclosures to the latest levels.
(See 1.4 Dependencies.)

   To download drive expansion ESM firmware, do the following:
   a. Open the Subsystem Management window.
   b. Click Advanced => Maintenance => Download => Environmental (ESM) Card Firmware. Follow the online instructions.

   Note: The drive expansion enclosure ESM firmware can be updated online with no downtime if both ESMs in each of the drive expansion enclosures are functional and one (and ONLY one) drive expansion enclosure at a time is selected in the ESM firmware download window for ESM firmware updating.

   Note: SAN Volume Controller (SVC) customers are now allowed to download ESM firmware with concurrent I/O to the disk subsystem, with the following restrictions:
   1) The ESM firmware upgrade must be done on one disk expansion enclosure at a time.
   2) A 10 minute delay is required from when one enclosure is upgraded to the start of the upgrade of another enclosure. Confirm via the Storage Manager application's "Recovery Guru" that the DS5000 status is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.

   Note: Refer to the IBM System Storage™ Disk Storage Systems Technical Support web site for the current ESM firmware versions for the drive expansion enclosures.

4. Make any hard disk drive firmware updates as required. FAILURE to observe the minimum firmware requirement might cause your storage subsystem to be OUT-OF-SERVICE. Check the IBM System Storage™ Disk Storage Systems Technical Support web site for the latest released hard drive firmware if you have not already upgraded drive firmware to the latest supported version. (See 1.4 Dependencies.)

   To download hard disk drive firmware, do the following:
   a. Schedule down time, because the drive firmware upgrades require that no host I/Os are sent to the DS5000 controllers.
   b. Open the Subsystem Management window.
   c. Click Advanced => Maintenance => Download => Drive Firmware.
Follow the online instructions.

   Note: With controller firmware version 06.1x.xx.xx or later, multiple drives from up to four different drive types can be updated at the same time.

=======================================================================
3.0 Configuration Information
-----------------------------
3.1 Configuration Settings
--------------------------
1. By default, the IBM DS Storage Manager 10.77 or later does not automatically map logical drives if the storage partitioning feature is enabled. This means that the logical drives are not automatically presented to host systems.

   For a new installation, after creating new arrays and logical drives:
   a. If your host type is not Windows, create a partition with your host type and map the logical drives to this partition.
   b. If your host type is Windows, you can map your logical drives to the "Default Host Group" or create a partition with a Windows host type.

   When upgrading from previous versions of IBM DS Storage Manager to version 10.70 or later:
   a. If upgrading with no partitions created and you have an operating system other than Windows, you will need to create a partition with your host type and map the logical drives from the "Default Host Group" to this partition.
   b. If upgrading with storage partitions and an operating system other than Windows is accessing the default host group, you will need to change the default host type. After upgrading the NVSRAM, the default host type is reset to Windows Server 2003/2008 non-clustered for DS storage servers with controller firmware version 06.14.xx.xx or later. For DS4000 storage servers with controller firmware version 06.12.xx.xx or earlier, it is reset to Windows non-clustered (SP5 or later) instead.

   Refer to the IBM DS Storage Manager online help to learn more about creating storage partitions and changing host types.

2. Running script files for specific configurations.
Apply the appropriate scripts to your subsystem based on the instructions you have read in the publications or any instructions in the operating system readme file. A description of each script is shown below.

   - SameWWN.scr: Sets up the RAID controllers to have the same World Wide Names. The World Wide Names (node) will be the same for each controller pair. The NVSRAM default sets the RAID controllers to have the same World Wide Names.

   - DifferentWWN.scr: Sets up the RAID controllers to have different World Wide Names. The World Wide Names (node) will be different for each controller pair. The NVSRAM default sets the RAID controllers to have the same World Wide Names.

   - EnableAVT_W2K_S2003_noncluster.scr: This script will enable automatic logical drive transfer (AVT/ADT) for the Windows 2000/Server 2003 non-cluster heterogeneous host region. The default setting is to disable AVT for this heterogeneous host region. This setting is one of the requirements for setting up remote boot or SAN boot. Do not use this script unless it is specifically mentioned in the applicable instructions. (This script can be used for other host types if modifications are made in the script, replacing the Windows 2000/Server 2003 non-cluster host type with the appropriate host type that needs to have AVT/ADT enabled.)

   - DisableAVT_W2K_S2003_noncluster.scr: This script will disable automatic logical drive transfer (AVT) for the Windows 2000/Server 2003 non-cluster heterogeneous host region. This script will reset the Windows 2000/Server 2003 non-cluster AVT setting to the default. (This script can be used for other host types if modifications are made in the script, replacing the Windows 2000/Server 2003 non-cluster host type with the appropriate host type that needs to have AVT/ADT disabled.)

   - EnableAVT_Linux.scr: This script will enable automatic logical drive transfer (AVT) for the Linux heterogeneous host region.
Do not use this script unless it is specifically mentioned in the applicable instructions.

   - DisableAVT_Linux.scr: This script will disable automatic logical drive transfer (AVT) for the Linux heterogeneous host region. Do not use this script unless it is specifically mentioned in the applicable instructions.

   - EnableAVT_Netware.script: This script will enable automatic logical drive transfer (AVT) for the NetWare Failover heterogeneous host region. Do not use this script unless it is specifically mentioned in the applicable instructions.

   - DisableAVT_Netware.script: This script will disable automatic logical drive transfer (AVT) for the NetWare Failover heterogeneous host region. Do not use this script unless it is specifically mentioned in the applicable instructions.

   - disable_ignoreAVT8192_HPUX.script: This script will disable the DS4000 storage subsystem's ignoring of AVT requests for the HP-UX server-specific read pattern of 2 blocks at LBA 8192. Ignoring AVT requests for the LBA 8192 reads was implemented to prevent a possible AVT storm caused by the HP-UX server probing the available paths to the volume(s) in the wrong order when it detects a server-to-LUN path failure. Use this script only when you have not defined LVM mirrored volumes using the mapped logical drives from the DS4000 storage subsystems. Please contact IBM support for additional information, if required.

   - enable_ignoreAVT8192_HPUX.script: This script will enable the DS4000 storage subsystem's ignoring of AVT requests for the HP-UX server-specific read pattern of 2 blocks at LBA 8192. Ignoring AVT requests for the LBA 8192 reads was implemented to prevent a possible AVT storm caused by the HP-UX server probing the available paths to the volume(s) in the wrong order when it detects a server-to-LUN path failure. Use this script only when you have defined LVM mirrored volumes using the mapped logical drives from the DS4000 storage subsystems.
Please contact IBM support for additional information, if required.

3.2 Unsupported configurations
------------------------------
The configurations that are currently not supported with controller firmware version 07.77.xx.xx in conjunction with DS Storage Manager version 10.77 are listed below:

1. Any DS4200, DS4700, DS4800, DS4500, DS4400, DS4300, DS4100, FAStT500 and FAStT200 storage subsystem configurations are not supported with this version of controller firmware.

2. The EXP520 expansion enclosure is not supported attached to any IBM DS storage subsystem other than the DS5020. EXP810 drive enclosures are also supported in the DS5020 with the purchase of a premium feature key.

3. The IBM EXP395 expansion enclosure is not supported attached to any IBM DS storage subsystem other than the DS3950. EXP810 drive enclosures are also supported in the DS3950 with the purchase of a premium feature key.

4. The IBM EXP5000 expansion enclosure is not supported attached to any IBM DS storage subsystem other than the DS5100 and DS5300. EXP810 drive enclosures are also supported with the DS5100 and DS5300 once the RPQ is submitted and approved.

5. Fibre Channel loop environments with the IBM Fibre Channel Hub, machine types 3523 and 3534, in conjunction with the IBM Fibre Channel Switch, machine types 2109-S16, 2109-F16 or 2109-S08. In this configuration, the hub is connected between the switch and the IBM Fibre Channel RAID controllers.

6. The IBM Fibre Channel Hub, machine type 3523, connected to IBM machine types 1722, 1724, 1742, 1815, 1814, 3542 and 3552.

7. A configuration in which a server with only one FC host bus adapter connects directly to any storage subsystem with dual controllers is not supported. The supported configuration is one in which the server with only one FC host bus adapter connects to both controller ports of any DS storage subsystem with dual controllers via a Fibre Channel (FC) switch (SAN-attached configuration).
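The .scr/.script files described in section 3.1 are normally executed in the Storage Manager client's script editor, but they can also be run from a management station command line with SMcli's script-file option. The sketch below only builds and prints the command rather than running it; the subsystem name, script file, and log path are placeholders, and the flags should be verified against the Command Line Interface and Script Commands guide for your Storage Manager version.

```shell
# Hypothetical names -- substitute the subsystem name shown in the
# Enterprise Management window and the script you were told to run.
SUBSYSTEM_NAME="DS5020-01"
SCRIPT_FILE="EnableAVT_Linux.scr"

# -n selects a named subsystem, -f executes a script file, -o logs
# output; verify these flags against your SM version's CLI guide.
CMD="SMcli -n $SUBSYSTEM_NAME -f $SCRIPT_FILE -o /tmp/avt-script.log"
echo "$CMD"

# Uncomment to run against a live subsystem:
# eval "$CMD"
```

As noted in section 3.1, only run an AVT script when the applicable instructions specifically call for it.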
3.3 Helpful Hints
------------------
1. Depending on the storage subsystem that you have purchased, you may have to purchase the storage partitioning premium feature option or an option to upgrade the number of supported partitions in a storage subsystem. Please see IBM marketing representatives or IBM resellers for more information.

2. When making serial connections to the DS storage controller, the recommended baud rate is 38400.

   Note: Do not make any connections to the DS3950/DS4000/DS5000 storage subsystem serial ports unless instructed to do so by IBM Support. Incorrect use of the serial port might result in loss of configuration, and possibly, data.

3. All enclosures (including storage subsystems with internal drive slots) on any given drive loop/channel should have completely unique IDs, especially the single-digit (x1) portion of the ID, assigned to them. For example, in a maximally configured DS4500 storage subsystem, enclosures on one redundant drive loop should be assigned IDs 10-17 and enclosures on the second drive loop should be assigned IDs 20-27. Enclosure IDs with the same single digit, such as 11, 21 and 31, should not be used on the same drive loop/channel.

   The DS3950, DS5020, DS4200 and DS4700 storage subsystems and the EXP395, EXP520, EXP420, EXP810, and EXP5000 storage expansion enclosures do not have mechanical ID switches. These storage subsystems and storage expansion enclosures automatically set the enclosure IDs. IBM recommends not making any changes to these settings unless the automatic enclosure ID settings result in non-unique single-digit settings for enclosures (including the storage subsystems with internal drive slots) in a given drive loop/channel.

4. The ideal configuration for SATA drives is one logical drive per array and one OS disk partition per logical drive. This configuration minimizes the random head movements that increase stress on the SATA drives.
As the number of drive locations to which the heads have to move increases, application performance and drive reliability may be impacted. If more logical drives are configured, but not all of them are used simultaneously, some of the randomness can be avoided. SATA drives are best used for long sequential reads and writes.

5. Starting with the DS4000 Storage Manager (SM) host software version 9.12 or later, the Storage Manager client script window looks for files with the file type ".script" as the possible script command files. Previous versions of the DS4000 Storage Manager host software look for the file type ".scr" instead (i.e. enableAVT.script for 9.12 or later vs. enableAVT.scr for pre-9.12).

6. Interoperability with tape devices is supported on separate HBA and switch zones.

=======================================================================
4.0 Unattended Mode
---------------------
N/A

=======================================================================
5.0 WEB Sites and Support Phone Number
----------------------------------------
5.1 IBM System Storage™ Disk Storage Systems Technical Support web site:
    http://www.ibm.com/systems/support/storage/disk

5.2 IBM System Storage™ Marketing web site:
    http://www.ibm.com/systems/storage/disk

5.3 IBM System Storage™ Interoperation Center (SSIC) web site:
    http://www.ibm.com/systems/support/storage/ssic/

5.4 You can receive hardware service through IBM Services or through your IBM reseller, if your reseller is authorized by IBM to provide warranty service. See http://www.ibm.com/planetwide/ for support telephone numbers, or in the U.S. and Canada, call 1-800-IBM-SERV (1-800-426-7378).

IMPORTANT: You should download the latest version of the DS Storage Manager host software, the storage subsystem controller firmware, the drive expansion enclosure ESM firmware and the drive firmware at the time of the initial installation and when product updates become available.
For more information about how to register for support notifications, see the following IBM Support web page:
www.ibm.com/systems/support/storage/subscribe/moreinfo.html

You can also check the Stay Informed section of the IBM Disk Support web site, at the following address:
www.ibm.com/systems/support/storage/disk

=======================================================================
6.0 Trademarks and Notices
----------------------------
6.1 The following terms are trademarks of the IBM Corporation in the United States or other countries or both:

    IBM
    DS4000
    DS5000
    FAStT
    System Storage
    the e-business logo
    xSeries
    pSeries
    HelpCenter

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a registered trademark of Linus Torvalds.

Other company, product, and service names may be trademarks or service marks of others.

=======================================================================
7.0 Disclaimer
----------------
7.1 THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IBM DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE AND MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT. BY FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR COPYRIGHTS.

7.2 Note to U.S. Government Users -- Documentation related to restricted rights -- Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corporation.