IBM System Storage DS Storage Manager version 10.86.x5.43 for VMware ESX Server 3.5 U5 (P22), 4.1 U1 and U2, 5.0, 5.0 U1 and 5.1

Note: VMware ESX Server 2.0 is supported by the DS4000 Storage Manager version 8.x only. In addition, VMware ESX Server 2.1 is supported by the DS4000 Storage Manager version 9.1 only. VMware ESX Server 2.1 is not supported with DS4000 storage subsystems having version 06.12.xx.xx and higher controller firmware.

VMware ESX Server host attachment to the DS4000/DS5000 storage subsystems requires the additional purchase of the IBM DS4000/DS5000 VMware Host Kit Option or Feature Code. The IBM VMware Host Kit Option contains the IBM licensing required to attach a VMware ESX Server host to the DS4000/DS5000 storage subsystems. Please contact your IBM service representatives or IBM resellers for purchasing information.

Important: A problem causing recursive reboots exists when using 7.36.08 and 7.36.12 firmware on IBM System Storage DS4000 or DS5000 systems. This problem is fixed in 7.36.14.xx and later firmware. All subsystems currently using 7.36.08 or 7.36.12 firmware MUST run a file system check tool (DbFix) before and after the firmware upgrade to 7.36.14.xx or higher. Instructions for obtaining and using DbFix are contained in the 7.36.14.xx or higher firmware package. Carefully read the firmware readme and the DbFix instructions before upgrading to firmware 7.36.14.xx or higher. For subsystems with firmware level 7.36.08 or 7.36.12, configuration changes should be avoided until a firmware upgrade to 7.36.14.xx or higher has been completed successfully. Subsystems not currently using 7.36.08 or 7.36.12 do not need to run DbFix prior to upgrading to 7.36.14.xx or higher. DbFix may be run after upgrading to 7.36.14.xx or higher, but it is not required. DbFix is only applicable to subsystems using 7.36.xx.xx or greater firmware. If problems are experienced using DbFix, or the resulting message received is "Check Failed", DO NOT upgrade your firmware; contact IBM support before taking any further actions.

(C) Copyright International Business Machines Corporation 1999, 2013. All rights reserved. US Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Note: Before using this information and the product it supports, read the general information in "Notices and trademarks" in this document.

Important:
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS3000/DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation. (This can be the same workstation that you use for the browser-based VMware Management Interface.) The SMclient program is included in the IBM System Storage DS Storage Manager version 10 for the 32 and 64 bit versions of Linux operating systems or the IBM System Storage DS Storage Manager version 10 for Microsoft Windows operating systems host software packages.
2. The storage manager client software packages for Windows and Linux contain non-IBM code (Open Source code). Please review and agree to the Non-IBM Licenses and Notices terms stated in the DS Storage Manager Non_IBM_Licenses_and_Notices.v3.pdf file before use. This pdf file is packaged with the appropriate storage manager client software package.
Refer to the IBM System Storage Support Web Site or CD for the IBM System Storage DS Storage Manager Version 10 Installation and Host Support Guide. This guide, along with the DS Storage Manager program Online Help, provides the installation and support information.

Last Update: 12/02/2013

Products Supported
-----------------------------------------------------------------
| New Model  | Old Model | Machine Type | Model                   |
|------------|-----------|--------------|-------------------------|
| DS5300     | N/A       | 1818         | 53A                     |
|------------|-----------|--------------|-------------------------|
| DS5100     | N/A       | 1818         | 51A                     |
|------------|-----------|--------------|-------------------------|
| DS5020     | N/A       | 1814         | 20A                     |
|------------|-----------|--------------|-------------------------|
| DS4800     | N/A       | 1815         | 82A, 82H, 84A, 84H,     |
|            |           |              | 88A, 88H, 80A, 80H      |
|------------|-----------|--------------|-------------------------|
| DS4700     | N/A       | 1814         | 70A, 70H, 72A, 72H,     |
|            |           |              | 70T, 70S, 72T, 72S      |
|------------|-----------|--------------|-------------------------|
| DS4500     | FAStT 900 | 1742         | 90X, 90U                |
|------------|-----------|--------------|-------------------------|
| DS4400     | FAStT 700 | 1742         | 1RX, 1RU                |
|------------|-----------|--------------|-------------------------|
| DS4300     | FAStT 600 | 1722         | 60X, 60U, 60J, 60K,     |
|            |           |              | 60L                     |
|------------|-----------|--------------|-------------------------|
| DS4200     | N/A       | 1814         | 7VA, 7VH                |
|------------|-----------|--------------|-------------------------|
| DS4100     | FAStT 100 | 1724         | 100, 1SC                |
|------------|-----------|--------------|-------------------------|
| DS3950     | N/A       | 1814         | 94H, 98H                |
|------------|-----------|--------------|-------------------------|
| DCS3700    | N/A       | 1818         | 80C                     |
|------------|-----------|--------------|-------------------------|
| DS3500     | N/A       | 1746         | C2A, A2S, A2D, C4A,     |
|            |           |              | A4S, A4D                |
|------------|-----------|--------------|-------------------------|
| DS3200     | N/A       | 1726         | 21X, 22X, 22T, HC2, HC6 |
|------------|-----------|--------------|-------------------------|
| DS3300     | N/A       | 1726         | 31X, 32X, 32T, HC3, HC7 |
|------------|-----------|--------------|-------------------------|
| DS3400     | N/A       | 1726         | 41X, 42X, 42T, HC4, HC8 |
-----------------------------------------------------------------

IBM System Storage DS4000 Storage Manager version 9.1x and later for VMware does not support the FAStT500 and FAStT200 (machine types 3552 and 3542) with ESX Server 2.1 or ESX Server 2.5 or higher.

Please refer to the System Storage Interoperation Center (SSIC) for the latest VMware ESX Server interoperability information at the following web site:
http://www-03.ibm.com/systems/support/storage/config/ssic/

CONTENTS
--------
1.0 Overview
2.0 Installation and Setup Instructions
3.0 Helpful Configuration Tips
4.0 WEB Sites and Support Phone Number
5.0 Trademarks and Notices
6.0 Disclaimer

========================================================================
1.0 Overview
------------------

1.1 Overview
-------------------
There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environment. To manage the DS3000/DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the DS4000/DS5000 storage subsystem via the out-of-band management method. (This can be the same workstation that you use for the browser-based VMware Management Interface.) The SMclient program is included in the IBM System Storage DS Storage Manager version 10.86 for the 32 and 64 bit versions of Linux operating systems or the IBM System Storage DS Storage Manager version 10.86 for Microsoft Windows operating systems host software packages.
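For example, once the SMclient package is installed on the management workstation, the subsystem can be added and checked out-of-band by its controller Ethernet addresses. The commands below are a sketch only; the 9.11.22.33 and 9.11.22.34 controller IP addresses are placeholders, not values from this document.

   # Add the storage subsystem to the Enterprise Management Window (out-of-band)
   SMcli -A 9.11.22.33 9.11.22.34

   # Confirm that the subsystem responds by retrieving its profile
   SMcli 9.11.22.33 9.11.22.34 -c "show storageSubsystem profile;"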
The usage permission of the SMclient program in the IBM System Storage DS Storage Manager version 10.86 host software packages for Windows and Linux does not give the user entitlement to Windows or Linux host IO privileges. For those DS4000/DS5000 storage subsystems that require the Windows and Linux host entitlement kits/options, these kits/options must be purchased before these servers can be IO attached to the DS4000/DS5000 storage subsystems.

The IBM Storage Manager host software version 10.8x new features and changes are described in the corresponding Change History document. Please refer to that document for more information on new features and modifications.

=======================================================================
1.2 Limitations
---------------
IMPORTANT: The listed limitations are cumulative. However, they are listed by DS3000/DS4000/DS5000 storage subsystem controller firmware and Storage Manager host software release to indicate the controller firmware and Storage Manager host software release in which they were first seen and documented.

No new limitations with Storage Manager version 10.86.x5.43 release (controller firmware 07.86.xx.xx)

No new limitations with Storage Manager version 10.86.xx05.0035 release (controller firmware 07.86.xx.xx)

New limitations with Storage Manager version 10.86.xx05.0028 release (controller firmware 07.86.xx.xx)
1. During the GUI mode of the installation on SLES 11 SP2, use the mouse to select the options instead of the keyboard Enter key, or run the installation in console mode. (LSIP200317279)
2. JAWS is only able to read out the title of the window that has the initial focus; it does not read out any content inside that window. (LSIP200327858)
3. When using the OLH with the JAWS screen reader, you will have difficulty navigating through the content in the Index tab under the Help content window due to incorrect and duplicate reading of the text. Please use the Search/Find tab in the OLH. (LSIP200331090)
4. When using the accessibility software JAWS 11 or 13, you may hear the screen reading of a background window, even if the dialog is not in focus. Please use the INSERT+B key to get the reading reinitiated for the dialog in focus. (LSIP200329868)
5. You will not be able to find the Tray tab on the storage array profile dialog launch. Navigate to the Hardware tab to find the Tray tab. (LSIP200332950)
6. You will not be able to perform multiple array upgrades for arrays having different firmware versions. Please upgrade arrays having different firmware versions separately. (LSIP200335962)
7. You may hit an IO error if all the paths are lost due to a delayed uevent. Always run the 'multipath -v0' command to rediscover the returning path. This will prevent the host from encountering an IO error due to the host losing all paths should the alternate path fail. (LSIP200347725)
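   For example, after the failed path returns, a typical rescan sequence on the Linux host looks like the following. This is a sketch only; device and map names will differ per configuration.

      # Rediscover returning paths so the host regains redundancy
      multipath -v0

      # Confirm that both paths are active again for each mapped logical drive
      multipath -ll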
8. The volume copy pair itself is removed if the user attempts to create a shadow copy of the source volume of a volume copy pair using the VSS provider. There is no data loss, as shadow copy creation is only allowed after the volume copy is 100% complete. The user must manually re-establish the volume copy relation. This results in a re-copy of data from the source volume to the destination volume. (LSIP200352122)
9. There may be some confusion when the data is compared between the Summary tab and the storage array profile. There is no actual functionality issue; the way the contents are labelled is not consistent. (LSIP200354832)

New limitations with Storage Manager version 10.84.xx.30 release (controller firmware 07.84.xx.xx).
1. On a disk pool of more than 30 drives with T10-PI enabled, the reconstruction progress indicator never moves because a finish notification is not sent to Storage Manager. When seeing this issue, simply putting the replacement drive in the storage subsystem will allow the reconstruction to resume. (LSIP200298553)

No new limitations with Storage Manager version 10.83.xx.23 release (controller firmware 07.83.xx.xx).

New limitations with Storage Manager version 10.83.xx.18 release (controller firmware 07.83.xx.xx).
1. ALUA is supported with ESX server versions 5.0 U1 and later. In these ESX server versions, the multipath driver claim rules are configured correctly to handle the ALUA functionality of controller firmware version 7.83.xx.xx and later.
2. A non-T10 PI logical drive cannot be "volumecopied" to a logical drive with T10 PI functionality enabled. A non-T10 PI logical drive can only be volumecopied to a non-T10 PI logical drive. (LSIP200263988)
3. Initiating dynamic logical drive expansion (DVE) on logical drives that are part of an asynchronous enhanced remote mirroring (ERM) relationship without write order consistency will result in an error that is misleading because the controller sends an incorrect return code. DVE can only be performed on logical drives in an asynchronous ERM relationship with write order consistency or in a synchronous ERM relationship. (LSIP200287980)
4. Cache settings cannot be updated on thin logical drives. They can only be updated on the repository drives that are associated with these thin logical drives. (LSIP200288041)
5. Wait at least two minutes after cancelling a logical drive creation in a disk pool before deleting any logical drives just created in that disk pool by the cancelled logical drive creation process. (LSIP200294588)
6. T10 PI errors were incorrectly reported in the MEL log for T10 PI-enabled logical drives that participate in an enhanced remote mirroring relationship during logical drive initialization. (LSIP200296754)
7. Having more than 20 SSD drives in a storage subsystem will result in an SSD premium feature "Out-of-Compliance" error. One has to remove the extra SSD drives to bring the total number of SSDs in the storage subsystem to 20 or less. (LSIP200165276)
8. Pressing Shift+F10 does not display the shortcut or context menu for the active object in the Storage Manager host software windows. The work-around is to use a right-click on the mouse or the Windows key on the keyboard. (LSIP200244269)
9. Storage Manager subsystem management window performance might be slow due to a Java runtime memory leak. The work-around is to close the Storage Manager client program after the management tasks are completed. (LSIP200198341)
10. The VAAI Clone command could fail if it coincides with a controller reboot or failover event. There is no work-around. (LSIP200260946)
New limitations with Storage Manager version 10.77.xx.28 release (controller firmware 07.77.xx.xx).
1. Only one EXP5060 drive slot with a 3 TB SATA drive inserted can have the ATA translator firmware updated at any one time if the inserted 3 TB drive has "Incompatible" status. Simultaneously updating ATA translator firmware on multiple 3 TB drives having an "Incompatible" status might result in an "interrupted" firmware download state that requires power-cycling the subsystem to clear it. This limitation is reduced with Storage Manager version 10.83 and later; the timeout was increased to accommodate simultaneously updating ATA translator firmware on up to five 3 TB SATA drives having an "Incompatible" status.
2. The 3 TB NL SAS drive can only be used to create non-T10 PI arrays and logical drives. However, in certain conditions where there are no available hot-spare drives for T10 PI-enabled arrays, it will be used by the controller as a hot spare for a failed drive in these T10 PI-enabled arrays. The controller operates properly in this scenario and there are no adverse effects to the system while the 3 TB NL SAS drive is being used as the hot spare in a T10 PI-enabled array. This limitation is removed with controller firmware version 7.83.xx.xx and later. Please upgrade to controller firmware version 7.83.xx.xx or later if there is a need to create T10 PI-enabled arrays using 3 TB NL SAS drives.

New limitations with Storage Manager version 10.77.xx.16 release (controller firmware 07.77.xx.xx).
1. When you switch between the Search tab and the TOC tab, the topic that was open in the Search tab is still open in the TOC tab. Any search terms that were highlighted in the Search tab are still highlighted in the TOC tab. This always happens when you switch between the Search tab and the TOC tab. Workaround: Select another topic in the TOC to remove all search term highlighting from all the topics.
2. A Support tab in the AMW kept open for a long duration will result in the display of unevenly spaced horizontal marks. If Storage Manager is kept open for long durations, a few grey lines may be seen on the Support tab after restoring the AMW. Re-launching the AMW eliminates this problem.
3. The storage subsystem configuration script contains the logical drive creation commands with an incorrect T10PI parameter. The workaround is to manually edit the file to change instances of dataAssurance parameters to T10PI.
4. The DS3500 subsystem does not support External Key Management at this time. Please contact IBM resellers or representatives for such support in the future.

New limitations with Storage Manager version 10.70.xx.25 release (controller firmware 07.70.xx.xx).
1. Kernel panic reported on Linux SLES 10 SP3 during controller failover. The probability of this issue happening in the field is low. Workaround: Avoid placing controllers online/offline frequently. Novell was informed about this issue. Opened Novell Bugzilla NVBZ589196.
2. CHECK CONDITION 0B/4E/00 returned during ESM download on DS5000. The interposer running LP1160 firmware is reporting an Overlapped Command (0B/4E/00) for a Read command containing an OXID that was very recently used for a previous read command on the same loop to the same AL_PA. In almost all cases, this command will be re-driven successfully by the controller. Impact should be negligible. Workaround: Can be avoided by halting volume I/O prior to downloading ESM firmware.
3. If a host sees RAID volumes from the same RAID module that are discovered through different interface protocols (Fibre Channel/SAS/iSCSI), failovers will not occur properly and host IOs will error out.
If the user does not map volumes to a host that can be seen through different host interfaces, this problem will not occur. Workaround: Place the controller online and reboot the server to see all volumes again.
4. Linux guest OS reported I/O errors during a controller firmware upgrade on ESX 4.1 with a QLogic HBA. The user will see I/O errors during controller activation from the guest OS due to mapped devices becoming inaccessible and offline. Workaround: This issue occurs on various Linux guest OSes, so to avoid the issue the user should perform an offline (no I/O to controllers) controller firmware upgrade.
5. I/O error on Linux RHEL 4.8 after rebooting a controller on DS3500 iSCSI. The devices will be disconnected from the host until the iSCSI sessions are re-established. Workaround: Restart the iSCSI service. Configure the iSCSI service to start automatically on boot.
6. Controller firmware upgrade on ESX 4.1 with SLES 11 fails with an IO error on the SLES 11 guest partition. This issue occurs if there is a filesystem volume in the SLES 11 VM. The user will see I/O errors, and filesystem volumes in SLES 11 VMs will be changed to read-only mode. Workaround: The user can perform the controller firmware upgrade with either no I/O running on the SLES 11 VM or no filesystem created in the SLES 11 VMs.
7. "gnome-main-menu" crashes unexpectedly while installing the host software on SUSE 10. The crashes appear to happen randomly with other applications as well. After the crash the menu reloads automatically. Dismiss the prompt and the host will reload the application. This problem appears to be a vendor issue.
8. I/O errors on RHEL 4.8 guests on VMware 4.1 during controller reset. VMware has suggested VMware 4.1 P01 may resolve this issue. No support was issued for RHEL 4.8 guests under VMware 4.1 over SAS. Workaround: Use RHEL 5.5.
9. VMware guest OS not accessible on iSCSI DS3524 (VMware SR 1544798051, VMware PR 582256). VMware has suggested VMware 4.1 P01 may resolve this issue. Workaround: Use Fibre Channel or SAS connectivity.
10. When DMMP is running in a BladeCenter SAS environment, I/O errors occur during failover and controller firmware operations. Support for Device Mapper and SLES 11.1 SAS has been restricted and will not be published. Workaround: Install RDAC.
11. SLES 11.1 ppc SAN boot fails on PS700 with a 10 Gb QLogic Ethernet adapter to DS3512. After configuration and installation of Linux on a LUN using software iSCSI, the JS blade will not boot into the OS. Workaround: Use local boot, SAS or Fibre Channel.
12. "No response" messages from Device Mapper devices with volumes on the non-preferred path. Any I/O operations against volumes mapped to failed devices will time out or hang. Workaround: This problem requires the host to be rebooted in order to restart I/O successfully. Update SLES 11.1 to maintenance update 20101008 containing kernel version 2.6.32.23-0.3.1. Note that this kernel version has not been fully certified by LSI and should only be used if this issue is encountered. Bugzilla #650593 contains issue details and the fix provided by Novell.
13. LifeKeeper 7.2.0 recovery kits require multiple host ports to use SCSI reservations. Workaround: Use two single-port SAS HBAs. In this case each port will be represented as a host, and LifeKeeper will identify both separately. Another way to avoid the issue is to use MPP as the failover driver.
14. Unexpected "jexec" messages during mpp installation/un-installation on SLES 11.1. Workaround: None - https://bugzilla.novell.com/show_bug.cgi?id=651156
15. I/O write errors on mounted LifeKeeper volumes before node resources transfer to another node.
Recovery: I/O can be restarted as soon as node resources are transferred to another node in the cluster. Workaround: None.
16. Solaris 10u8 guest OS disks become inaccessible on ESX 3.5 U5 during a sysReboot on the array controller. This can be avoided by not using raw device mappings and instead using virtual disks. If raw device mappings are required, they must be used with virtual compatibility mode selected when adding the disks to the guest OS. Workaround: None.
17. Solaris 10u8 guest OSes reported I/O errors during a controller firmware upgrade on ESX 3.5 U5 with a QLogic HBA. Permanent restriction. Workaround: Perform the upgrade with no I/O.
18. Solaris guest OS reported I/O errors during a controller reset on ESX 3.5 U5. The only recovery method is to reboot the failed VM host. Permanent restriction.

New limitations with Storage Manager version 10.70.xx.10 release (controller firmware 07.70.xx.xx).
1. Storage Manager "Error 1000 - Could not communicate with the storage...." can occur during actions such as clearing the configuration on a large system (i.e. 448 drives with a DS5300). Storage Manager has a 120 second timeout and retries 2 more times when retrieving status. Some actions such as this may take 8 minutes or longer.
2. The Array Management Window shows 2048 volume copies allowed when only 2047 maximum volume copies can be created. For a given array, at least one source volume will be present for the volume copy feature; hence, the actual number of "copy relationships" is one less than the maximum volumes allowed.
3. An FDE drive in the "Security Locked" state is reported as being in an "incompatible" state in the drive profile. The drive can be recovered by importing the correct lock key.
4. Deselecting product components using the keyboard when installing Storage Manager still installs them. You will have to use the mouse for individual component selection.

Limitations with Storage Manager version 10.60.xx.17 release (controller firmware 07.60.xx.xx).
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux. In addition, review the controller firmware readme for limitations specific to the storage subsystem.
2. VMware is only supported with Fibre Channel host attach; iSCSI is not supported at this time.

New limitations with Storage Manager version 10.60.xx.11 release (controller firmware 07.60.xx.xx).
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux. In addition, review the controller firmware readme for limitations specific to the storage subsystem.
2. VMware is only supported with Fibre Channel host attach; iSCSI is not supported at this time.

New limitations with Storage Manager version 10.60.xx.05 release (controller firmware 07.60.xx.xx).
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux. In addition, review the controller firmware readme for limitations specific to the storage subsystem.
2. VMware is only supported with Fibre Channel host attach; iSCSI is not supported at this time.

New limitations with Storage Manager Installer (SMIA) package version 10.50.xx.23 release.
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux. In addition, review the controller firmware readme for limitations specific to the storage subsystem.
2. Concurrent controller firmware download is not supported in a storage subsystem environment with a VMware ESX server host attached.
3. Unable to boot ESX 3.0.3 from SAN using an IBM System x3950 server with BIOS version 1.06.

New limitations with Storage Manager Installer (SMIA) package version 10.50.xx.19 release.
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux. In addition, review the controller firmware readme for limitations specific to the storage subsystem.
2. Concurrent controller firmware download is not supported in a storage subsystem environment with a VMware ESX server host attached.

New limitations with Storage Manager Installer (SMIA) package version 10.36.xx.13 release.
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux. In addition, review the controller firmware readme for limitations specific to the storage subsystem.

Limitations with Storage Manager Installer (SMIA) package version 10.30.xx.09 release.
1. There is no IBM System Storage DS Storage Manager host software package for the VMware ESX Server operating system environments. To manage the DS4000/DS5000 storage subsystems that are IO attached to your VMware ESX Server hosts, you must install the DS Storage Manager client software (SMclient) on a Windows or Linux management workstation and manage the subsystem out-of-band. Please review the operating system limitations specific to Windows and Linux.
In addition, review the controller firmware readme for limitations specific to the storage subsystem.
2. The DS5300 and DS5100 are supported on VMware ESX v3.5 only.

Limitations with Storage Manager Installer (SMIA) package version 10.15.xx.08 release.
1. Reconfiguration operation (DRM) is delayed under some circumstances. When a drive fails before the DRM completes, it can take up to eight times as long for the DRM to complete. DRM reconfigurations to RAID 6 have the longest impact since there are four times as many calculations and writes that have to occur compared to other RAID levels.
2. Host software display of the controller current rate is wrong below 4 Gbps. Host Software Client - AMW - Logical/Physical View - Controller Properties - Host Interfaces - Current rate displays "Not Available" when the controller negotiated speed is 2 Gbps. When the controller negotiated speed is reduced to 1 Gbps, the Current rate displays "2Gbps".

New limitations with Storage Manager Installer (SMIA) package version 10.10.xx.xx release (controller firmware 07.10.xx.xx).
1. The Controller Alarm Bell icon does not appear as a flashing icon indicator on the screen to get the user's attention, but the icon does change appearance.
2. Miswiring of drive tray cabling with DS4700 and DS4200 can cause continuous reboot of a controller. To correct this situation, power down the subsystem, cable the drive trays correctly, and power the subsystem back up.
3. A controller button may appear enabled and mislead the user into thinking that a controller is selected when, in fact, a controller was not highlighted for the button to appear ready.
4. The search key is not marked correctly in a page due to a JavaHelp bug in JavaHelp 2.0_01. A search for the keyword "profile" ended with the phrase "prese 'nting p' rofile" being marked.
5. storageArrayProfile.txt should be renamed to storageSubsystemProfile.txt in Support Data.
6. Bullets are incorrectly placed in the Volume Modification help page.
7. Unable to Escape out of the Help display. The user will be required to close the window by using the window close procedure (exit, etc.).
8. Bullets and descriptions are not aligned on the same line in "Viewing mirror properties".
9. The Help window is not refreshed properly when using the AMW Help window. The workaround is to close and reopen SANtricity.
10. CLI command failure on creation of volume(s) if the capacity parameter syntax is not specified correctly. A space needs to be used between the integer value and the units in the capacity option of this command, for example, "create volume volumeGroup[4] capacity=15 GB;".
11. The customer will see high ITW counts displayed in the GUI (RLS feature) and logs (files) for diagnostics (support bundle, DDC, etc.) and may be concerned that there is a problem. This will not cause a critical MEL event. Known problem, previously restricted: when a DS4700 or DS4200 controller reboots, these counters increment.
12. A single tray power-cycle during IO activity causes drives in the tray to become failed. The customer may see loss of a drive (failed) due to a timing issue between drive detection and spin-up of the drive. One of two conditions results on power-up: (most likely) the drive will be marked as optimal/missing with the piece failed, or (rarely) the drive will be marked as failed with the piece failed. The workaround is to unfail (revive) the drive, which restarts reconstruction of all pieces.
13. Event log critical event 6402 was reported after creating 64 mirror relations. Eventually, the mirror state transitions to synchronizing and proceeds to completion on mirror creation.
Workaround is to ignore the MEL logging since this occurs on creation of mirror volumes. 14. Reconfiguration operations during host IO may result in IO errors when arrays contain more than 32 LUNs. These operations include Dynamic Capacity Expansion, Defragmentation, Dynamic Volume Expansion, Dynamic RAID Migration. The workaround is to quiesce host IO activity during reconfiguration. 15. Heavy IO to a narrow volume group of SATA drives can result in host IO timeouts. A narrow volume group refers to an array built of very few drives; namely 1 drive RAID 0, 1x1 RAID 1, and 2 + 1 RAID 5. The workaround is to build arrays of SATA drives out of 4 + 1 or greater. 16. When managing the storage subsystem in-band, the upgrade utility will show the upgrade as failed. This is because of the update and reboot of the controllers when activating the new firmware. SMagent is not dynamic and will need to be restarted to reconnect to the storage subsystem. 17. Selecting and dragging text within the storage profile window causes the window to be continuously refreshed. Work around is to select and copy, do not drag the text. 18. When configuring alerts through the task assistant, the option stays open after selecting OK. The window only closes when the cancel button is selected. 19. The Performance Monitor displays error messages when the storage subsystem is experiencing exception conditions. The performance monitor has a lower execution priority within the controller firmware than responding to system IO and can experience internal timeouts under these conditions. 20. Critical MEL event (6402 - Data on mirrored pair unsynchronized) can occur under certain circumstances with synchronous RVM. The most likely scenario is when both primary and secondary are on a remote mirror and an error occurs with access to that host. Resynchronization should occur automatically, when automatic resynchronization is selected for a mirror relationship. However if any of the host sites should go down during this interval, recovery by the user is required. 21. A persistent miswire condition is erroneously reported through the recovery guru even though the subsystem is properly wired. The frequency of occurrence is low and is associated with an ESM firmware download or other reboot of the ESM. The ESM that is reporting the problem must be reseated to eliminate the false reporting. Not all miswire conditions are erroneous and must be evaluated to determine the nature of the error. 22. Drive path loss of redundancy has been reported during ESM download. This occurs when a drive port is bypassed. In some instances this is persistent until the drive is reconstructed. In other cases it can be recovered through an ESM reboot (second ESM download, ESM pull and replace). 23. Unexpected drive states have been observed during power cycle testing due to internal controller firmware contention when flushing MEL events to disk. The drives have been observed as reconstructing or replaced when they should have been reported as failed. Also volume groups have been reported degraded when all drives were assigned and optimal. An indication that this is the situation would be when drive reconstruction has not completed in the expected amount of time and does not appear to be making any progress. The work around is to reboot the controller owning the volume where the reconstruction has stalled. 24. Sometimes when an ESM is inserted a drive's fault line is asserted briefly. 
The fault line almost immediately returns to inactive, but the ESMs may bypass the drive. In these circumstances, the administrator will have to reconstruct the failed drive.
25. After a drive fails, a manually initiated copyback to a global hot spare may also fail. The work-around is to remove the failed drive and reinsert it; then the copyback should resume and complete successfully.
26. When an erroneous miswire condition occurs (as mentioned above in 21), the recovery guru reports the miswire on one controller but not on the other. In this situation, ignore the other controller and use the information supplied by the controller reporting the problem.
27. Occasionally a controller firmware upgrade to 07.10 will unexpectedly reboot a controller an extra time. This could generate a diagnostic data capture; however, the firmware upgrade is always successful.
28. When managing previous releases of firmware (06.19 and prior), "working" gets displayed as "worki" during volume creation.
29. The Performance Monitor error window does not come to the front. You must minimize all other foreground windows to get to the error popup window.
30. When a disk array is in a degraded state, the array will report "needs attention" to both the EMW and the AMW. After taking appropriate corrective action, the AMW view of the array will report "fixing" but the EMW state remains at "needs attention". Both statuses are valid; when the fault state is resolved, both views will change to "optimal".
31. Configuring separate Email alerts when two Enterprise Management windows are open on the same host will cause the alerts to disappear if one of the Enterprise windows is shut down and then restarted. If multiple Enterprise Management windows need to be open, it is recommended that they are opened on separate hosts, which will allow the configuration of alerts to be saved if one of the Enterprise Management windows is shut down and restarted.

Legacy restrictions that are still applicable:
1. Reflected Fibre Channel (FC) OPN frames occur when intermixing the EXP810, EXP710 and EXP100s behind DS4700 or DS4800 storage subsystems. This behavior causes excessive drive side timeout, drive side link down and drive side link up events to be posted in the DS4000 storage subsystem event log (MEL). It might also cause drives to be bypassed or failed by the controller. THE NEW DRIVE SIDE FC CABLING REQUIREMENT MUST BE ADHERED TO WHEN EXP100s ARE CONNECTED TO THE DS4700 OR DS4800 STORAGE SUBSYSTEMS. Please refer to the latest version of the Installation, User's and Maintenance Guide for these storage subsystems, posted on the IBM DS4000 Support web site, for more information. http://www.ibm.com/systems/support/storage/disk
2. Cannot increase the capacity of RAID arrays. RAID arrays with certain combinations of selected segment size and number of drives that make up the array will exceed the available working space in the controller dacstore, causing a reconfiguration request (like expanding the capacity of the array) to be denied. These combinations are generally the largest segment size (512KB) with 15 or more drives in the array. There is no work-around. C324144 105008
3. Interoperability problem between the Tachyon DX2 chip in the DS4500 and DS4300 storage subsystem controllers and the Emulex SOC 422 chip in the EXP810 expansion enclosure ESMs, causing up to 5 Fibre Channel loop type errors to be posted in the DS4000 storage subsystem Major Event Log during a 24 hour period.
There is a small window in the SOC 422 chip in which multiple devices can be opened at one time. This ultimately leads to Fibre Channel loop errors of Fibre Channel link up/down, Drive returned CHECK CONDITION, and Timeout on the drive side of the controller. IBM recommends the use of the Read-Link-Status function to monitor the drive loop for any problems in the drive loop/channel. There is no work-around.
4. The single digit of the Enclosure IDs for all enclosures (including the DS4000 storage subsystem with internal drive slots) in a given redundant drive loop/channel pair must be unique. For example, with four enclosures attached to the DS4300, the correct enclosure ID settings should be x1, x2, x3, and x4 (where x can be any digit that can be set). Examples of incorrect settings would be 11, 21, 31, and 41 or 12, 22, 32, and 62. These examples are incorrect because the x1 digits are the same in all enclosure IDs (either 1 or 2). If you do not set the single digit of the enclosure IDs to be unique among enclosures in a redundant drive loop/channel pair, then drive loop/channel errors might be randomly posted in the DS4000 subsystem Major Event Log (MEL), especially in the cases where the DS4300 storage subsystems are connected to EXP810s and EXP100s. In addition, enclosure IDs with the same single digit in a redundant drive loop/channel pair will cause the DS4000 subsystem controller to assign soft AL_PA addresses to devices in the redundant drive loop/channel pair. The problem with soft AL_PA addressing is that AL_PA address assignments can change between LIPs. This possibility increases the difficulty of troubleshooting drive loop problems because it is difficult to ascertain whether the same device with a different address or a different device might be causing a problem.
5. In DS4000 storage subsystem configurations with controller firmware 6.15.2x.xx and higher installed, the performance of write-intense workloads, such as sequential tape restores to DS4000 logical drives with large I/O request sizes (e.g. 256KB), is degraded if the DS4000 logical drives are created with small segment sizes such as 8KB or 16KB. The work-around is to create the DS4000 logical drives with a segment size of 64KB or higher.
6. Do not pull or insert drives during drive firmware download. In addition, ALL I/Os must also be stopped during drive firmware download. Otherwise, drives may be shown as missing, unavailable or failed.
7. Do not perform other storage management tasks, such as creating or deleting logical drives, reconstructing arrays, and so on, while downloading the DS4000 storage subsystem controller firmware and DS4000 EXP ESM firmware. It is recommended that you close all storage management sessions (other than the session that you use to upgrade the firmware) to the DS4000 storage subsystem that you plan to update.

=======================================================================
1.3 Enhancements
------------------------------
The DS Storage Manager version 10.86.x5.43 host software, in conjunction with controller firmware version 7.86.32.00 and higher, provides the following new functions:
- Add support for DCS3860 (7.86.36.01)
- Provide the fixes for the field defects as shown in the changelist file.

For more information, please view the IBM System Storage DS Storage Manager Version 10.8 Installation and Host Support Guide and the IBM System Storage DS Storage Manager Version 10 Copy Services User's Guide.
Note: Host type VMWARE has been added to NVSRAM as an additional host type, starting with controller firmware version 7.60.40.00. It is now separated from the Linux Cluster host type, LNXCLVMWARE or LNXCL, which has been renamed LNXCLUSTER. The VMWARE host type also has the "Not Ready" sense data and "Auto Volume Transfer" settings defined appropriately for VMware ESX server.
+ The DS4200 and DS4700 with controller firmware version 7.60.40.xx and later installed will use host index 21.
+ All other supported systems with controller firmware version 7.60.40 and later installed will use host index 16 instead.
Although not required, it is recommended to move to the VMWARE host type instead of continuing to use the Linux host type for VMware hosts, since any upgrade of controller firmware and NVSRAM would continue to require running scripts to modify the Linux host type for VMware hosts, whereas the VMWARE host type does not require running scripts. In addition, starting with controller firmware version 7.83.xx.xx, a new VMware host type, VMWareTPGSALUA, is created for use with storage subsystems having ALUA-enabled controller firmware installed.

IMPORTANT: Do not use Storage Manager host software version 10.77.x5.xx and earlier to manage storage subsystems with the 3 TB NL SAS and SATA drive option installed.

=======================================================================
1.4 Prerequisites
------------------
N/A

=======================================================================
1.5 Dependencies
-----------------
ATTENTION:
1. The 3 TB SATA drive option for the EXP5060 expansion enclosure requires ATA translator firmware version LW1613 and higher. The drive will be shown as "Incompatible" if it is installed in an EXP5060 drive slot with ATA translator firmware older than version LW1613. Please refer to the latest EXP5060 Installation, User's and Maintenance Guide for more information on working with the 3 TB SATA drive option.
2. Storage Manager host software version 10.83.x5.18 or higher is required for managing storage subsystems with 3 TB NL FC-SAS drives. Storage Manager version 10.83.x5.18 or higher, in conjunction with controller firmware version 7.83.xx.xx and later, allows the creation of T10 PI-enabled arrays using 3 TB NL FC-SAS drives.
3. The DS5000 FC-SAS drives must have FC-SAS interposer firmware version 2268 or later installed. FC-SAS interposer firmware version 2258 and earlier might report the drive inquiry ID incorrectly, which could lead to data loss in a certain sequence of events. FC-SAS interposer firmware version 2268 is available in the ESM/HDD firmware package version 1.78.
4. The IBM System Storage DS4000 Controller Firmware Upgrade Tool is required to upgrade any system from 6.xx controller firmware to the 7.7x.xx.xx controller firmware. This tool has been integrated into the Enterprise Management Window of the DS Storage Manager v10.83 and greater Client.
5. Always check the README files (especially the Dependencies section) that are packaged together with the firmware files for any required minimum firmware level requirements and the firmware download sequence for the DS4000 drive expansion enclosure ESM, the DS4000 storage subsystem controller and the hard drive firmware.
6. Required installation order for Storage Manager 10.86.xx.xx and controller firmware 07.86.32.00 or later in a Unix-type OS environment (i.e. Linux, AIX, HP-UX, Solaris):
   1. SMruntime - always first
   2. SMesm - required by client
   3. SMclient
   4. SMagent
   5. SMutil
   Note: Steps 1-5 will be done by the SMIA installer if the installation is performed using the SMIA installer.
   6. Controller firmware and NVSRAM
   7. ESM firmware
   8. Drive firmware
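   For reference, a minimal sketch of steps 1-5 on an RPM-based Linux management station is shown below. The package file names are placeholders only; use the actual file names shipped in the Storage Manager package for your version and architecture.

      # Install the Storage Manager host software packages in the required order
      rpm -ivh SMruntime-LINUX-10.86.xx.xx-1.noarch.rpm   # 1. SMruntime - always first
      rpm -ivh SMesm-LINUX-10.86.xx.xx-1.noarch.rpm       # 2. SMesm - required by client
      rpm -ivh SMclient-LINUX-10.86.xx.xx-1.noarch.rpm    # 3. SMclient
      rpm -ivh SMagent-LINUX-10.86.xx.xx-1.noarch.rpm     # 4. SMagent
      rpm -ivh SMutil-LINUX-10.86.xx.xx-1.noarch.rpm      # 5. SMutil
      # Steps 6-8 (controller firmware/NVSRAM, ESM firmware, drive firmware) are then
      # downloaded from the SMclient Enterprise and Subsystem Management windows.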
7. IN-BAND MANAGEMENT IS NOT SUPPORTED WITH VMWARE ESX SERVER. THERE ARE NO HOST SOFTWARE PACKAGES FOR THE VMWARE ESX SERVER OPERATING SYSTEM ENVIRONMENT. THE STORAGE MANAGER 10.83 CLIENT PROGRAM MUST BE INSTALLED IN OUT-OF-BAND MANAGEMENT CONFIGURATIONS, PREFERABLY ON THE SAME SYSTEM THAT IS USED TO MANAGE THE ESX SERVER 2.5 OR HIGHER. THE DS4000 STORAGE MANAGER 10.83 CLIENT PROGRAMS FOR THE FOLLOWING OPERATING SYSTEMS - MICROSOFT WINDOWS 2000, WINDOWS SERVER 2003 AND LINUX - ARE INCLUDED ON THE CD. PLEASE REFER TO THE FOLLOWING DIRECTORIES ON THE CD FOR THE DS4000 STORAGE MANAGER 10.83 CLIENT PROGRAM THAT IS APPROPRIATE FOR THE OPERATING SYSTEM OF THE COMPUTER THAT IS USED TO MANAGE THE DS4000 STORAGE SERVER - WIN32 (WINDOWS 2000), WS03_32BIT AND WS03_64BIT (WINDOWS SERVER 2003 IA32 AND IA64, RESPECTIVELY) AND LINUX (REDHAT 4 or 5 AND SUSE SLES 9 or 10).

IBM DS Storage Manager version 10.83 host software requires the DS3000/DS4000/DS5000 storage subsystem controller firmware to be at version 06.xx.xx.xx or higher. The IBM DS Storage Manager v10.36 supports storage subsystems with controller firmware versions 05.3x.xx.xx to 07.36.xx.xx. The IBM DS Storage Manager v10.70 supports storage subsystems with controller firmware versions 05.4x.xx.xx to 07.70.xx.xx.

=======================================================================
2.0 Installation and Setup Instructions
-----------------------------------------
Please refer to the Readme files in the Linux or Microsoft Windows versions of the IBM DS Storage Manager host software package that is used to manage the IBM DS storage subsystems for more information.

=======================================================================
3.0 Helpful Configuration Tips
------------------------------
1. In storage subsystem configurations with controller firmware version 7.83.xx.xx installed, the ALUA failover method is supported with ESX server 5.0 U1 and later (or 4.1 U3 and later). To verify that the ESX server multipath driver claim rules are configured correctly, perform the following actions:
   In an ESX 4.1 environment, execute the command -> esxcli nmp satp listrules VMW_SATP_LSI and verify the claim rule for VID/PID = IBM 1818 with the tpgs_off claim option specified.
   In an ESX 5.0 environment, execute the command -> esxcli storage nmp satp rule list -s VMW_SATP_ALUA and verify the claim rule for VID/PID = IBM 1818 with the tpgs_on claim option specified.
   To determine whether the ESX server host is using the ALUA plugin, perform either of the following commands, depending on the ESX server version, and look for the value "VMW_SATP_ALUA" for an ALUA-enabled storage subsystem and the value "VMW_SATP_LSI" for a non-ALUA-enabled storage subsystem.
   In ESX 4.1: #esxcli nmp device list
   In ESX 5.0: #esxcli storage nmp device list
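   For example, from the ESX/ESXi host console the checks above can be scripted as follows. This is a sketch only; the exact rule listing and device output vary with the installed ESX version and patch level.

      # ESX/ESXi 4.1: confirm the VMW_SATP_LSI claim rule for VID/PID IBM 1818
      esxcli nmp satp listrules | grep -i "1818"

      # ESXi 5.0 U1 and later: confirm the VMW_SATP_ALUA claim rule (tpgs_on) for IBM 1818
      esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep -i "1818"

      # Check which SATP actually claimed each DS3000/DS4000/DS5000 device
      esxcli nmp device list                 # ESX/ESXi 4.1
      esxcli storage nmp device list         # ESXi 5.0 and later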
2. Storage processors can be configured to return either Unit Attention or Not Ready when quiescent. A DS4000 storage subsystem whose logical drives are mapped to a Windows guest operating system should return Not Ready sense data when it is quiescent. Returning Unit Attention might cause the Windows guest to crash or receive IO errors during a failover. Refer to the VMware SAN Configuration guide to configure the DS4000 subsystem.
3. Storage Management software should not be installed on the ESX server or within the virtual machines. ESX Server has its own built-in failover mechanism, and path management is handled through the ESX Server Storage Manager GUI.
4. VMware Tools should be installed on all virtual machines using DS4000 logical drives.
5. The ESX Server Advanced options for SCSI device reset should be configured as follows:
   Disk.UseDeviceReset=0 (disabled for DS4000 logical drives)
   Disk.UseLunReset=1 (enabled for DS4000 logical drives)
   Disk.ResetOnFailover=0
      This option should be enabled (changed to 1) when using System/LUN (Raw) disk mappings or when using Microsoft cluster nodes across multiple ESX servers, because the Virtual Machine configuration file is edited to use the SCSI target address (vmhbaX.X.X.X) instead of the VMFS volume label.
   Disk.RetryUnitAttention=1
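   These settings can be changed from the ESX service console with the esxcfg-advcfg command (or from the Advanced Settings panel of the management interface). The commands below are a sketch and assume a classic ESX host with a service console; verify the option names against your ESX version before applying them.

      # Disable device reset and enable LUN reset for DS4000 logical drives
      esxcfg-advcfg -s 0 /Disk/UseDeviceReset
      esxcfg-advcfg -s 1 /Disk/UseLunReset

      # Leave ResetOnFailover disabled (set it to 1 only for Raw/System LUN mappings
      # or Microsoft cluster nodes spread across multiple ESX servers)
      esxcfg-advcfg -s 0 /Disk/ResetOnFailover

      # Retry IO on Unit Attention
      esxcfg-advcfg -s 1 /Disk/RetryUnitAttention

      # Read back a value to confirm the change
      esxcfg-advcfg -g /Disk/UseLunReset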
6. When using a "boot from SAN" configuration with volume mappings from multiple controllers, you must enable persistent bindings to ensure that the controller containing the boot LUN is the lowest numbered SCSI target address (vmhba0:0). If persistent bindings are not set, the boot LUN can be re-ordered to a higher numbered target address, causing the server to not boot. To set persistent bindings, run the command "/usr/sbin/pbind.pl -A" while the controller containing the boot LUN is vmhba0:0.
7. For ESX 2.1, 2.5, 2.5.1 or 2.5.2 server customers, we recommend the use of "LNXCL" as the host type to disable AVT. This reduces path thrashing behavior in the VM kernel during failover. In certain conditions this change can cause the vmkernel to hang on boot up due to a known issue with LUN discovery in ESX Server. Prior to implementing this change, please reference the IBM Redbook "Implementing VMware ESX Server 2.1 with IBM System Storage DS4000", available from the IBM support web site. The LUN discovery issue is to be corrected with the ESX Server 2.5 or later release.
8. MRU (Most Recently Used) is the supported failover policy when AVT is disabled, but for volumes that have 4 or more paths, RR is recommended over MRU for most applications. The recommended failover policy when using ALUA is RR. The preferred path can be configured using the Storage Manager client and then using the Rescan SAN option within the storage management GUI.
9. LUNs should be assigned to the ESX Server starting with LUN number 0.
10. The Access LUN should not be mapped to the ESX server host.
11. The VMware ESX Server 2.1 and higher FC HBA driver includes failover support. You will need to have two HBAs in the server to enable failover. Using a single HBA is supported, but you will not have path failover support. VMware ESX Server 2.5 or later does not have the capability to utilize the FAStT Management Suite Java (FAStT MSJ) program for multi-path I/O configuration. Instead, use the VMware ESX Server 2.5 instructions in the ESX Server 2.x Administration Guide to set up IO path failover support and to redistribute the LUNs between the FC HBAs.
12. Always use the VMware ESX Server driver for the FC HBA. Please refer to the VMware documentation on how to install and configure these FC HBA drivers.
13. Unique partition labels should be used if multiple targets (multiple DS4000 subsystems) are installed, to ensure proper labeling of logical volumes when the server is rebooted.
14. With ESX Server 2.1 or higher it is no longer necessary to reboot when creating or deleting LUNs. Use the Rescan SAN function within the storage management MUI to automatically recognize changes on the storage subsystem.
15. Please refer to the ESX Server Administration guides and README for more information on how to set up storage in a SAN environment.
16. The DS4000 controller host ports or the Fibre Channel HBA ports cannot be connected to Cisco FC switch ports with "trunking" enabled. You might encounter failover and failback problems if you do not change the Cisco FC switch port to "non-trunking" using the following procedure:
   a. Launch the Cisco FC switch Device Manager GUI.
   b. Select one or more ports by a single click.
   c. Right click the port(s) and select Configure; a new window pops up.
   d. Select the "Trunk Config" tab from this window; a new window opens.
   e. In this window under Admin, select the "non-trunk" radio button; it is set to auto by default.
   f. Refresh the entire fabric.
17. When making serial connections to the DS4000 storage controller, the baud rate is recommended to be set at either 38400 or 57600. Note: Do not make any connections to the DS4000 storage server serial ports unless instructed by IBM Support. Incorrect use of the serial port might result in loss of configuration and/or loss of data.
18. All enclosures (including DS4000 storage subsystems with internal drive slots) on any given drive loop/channel should have completely unique IDs, especially the single digit (x1) portion of the ID, assigned to them. For example, in a maximally configured DS4500 storage subsystem, enclosures on one redundant drive loop should be assigned IDs 10-17 and enclosures on the second drive loop should be assigned IDs 20-27. Enclosure IDs with the same single digit, such as 11, 21 and 31, should not be used on the same drive loop/channel. In addition, for enclosures with a mechanical enclosure ID switch, like the DS4300 storage subsystems and the EXP100 or EXP710 storage expansion enclosures, do not use the enclosure ID value of 0. The reason is that, with the physical design and movement of the mechanical enclosure ID switch, it is possible to leave the switch in a "dead zone" between ID numbers, which returns an incorrect enclosure ID to the storage management software. The most commonly returned enclosure ID is 0 (zero). In addition to causing the subsystem management software to report an incorrect enclosure ID, this behavior also results in an enclosure ID conflict error with the storage expansion enclosure or DS4000 storage subsystem whose ID is intentionally set to 0. The DS4200 and DS4700 storage subsystems and the EXP420 and EXP810 storage expansion enclosures do not have mechanical ID switches, so they are not susceptible to this problem. In addition, these storage subsystems and storage expansion enclosures automatically set the enclosure IDs. IBM's recommendation is not to make any changes to these settings unless the automatic enclosure ID settings result in non-unique single digit settings for enclosures (including the storage subsystems with internal drive slots) in a given drive loop/channel.
19. The DS4500 and DS4300 storage subsystems have new recommended drive-side cabling instructions. The DS4500 instructions are documented in the IBM System Storage DS4500 Installation, User's, and Maintenance Guide (GC27-2051-00 or IBM P/N 42D3302). The DS4300 instructions are documented in the IBM System Storage DS4300 Installation, User's, and Maintenance Guide (GC26-7722-02 or IBM P/N 42D3300). Please follow the cabling instructions in these publications to cable the new DS4500 and DS4300 setup.
If you have an existing DS4500 setup with four drive-side minihubs installed that was cabled according to the previously recommended cabling instructions, please schedule down time as soon as possible to make changes to the drive-side FC cabling. Refer to the IBM System Storage DS4500 and DS4300 Installation, User's, and Maintenance Guides for more information.
20. Running script files for specific configurations. Apply the appropriate scripts to your subsystem based on the instructions you have read in the publications or any instructions in the operating system readme file. A description of each script is shown below.
   - SameWWN.script: Sets up the RAID controllers to have the same World Wide Names. The World Wide Names (node) will be the same for each controller pair. The NVSRAM default sets the RAID controllers to have the same World Wide Names.
   - DifferentWWN.script: Sets up the RAID controllers to have different World Wide Names. The World Wide Names (node) will be different for each controller pair. The NVSRAM default sets the RAID controllers to have the same World Wide Names.
   - DisableUA_reporting_inVMware.script: Configures the storage subsystem RAID controllers to return Not Ready sense data. Storage subsystem RAID controllers can be configured to return either the Unit Attention or Not Ready message when quiescent. A DS4000/DS5000 storage subsystem whose logical drives are used by a Windows guest operating system should return Not Ready sense data when it is quiescent. Returning Unit Attention might cause the Windows guest to fail during a failover.
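   A script file can be applied either from the Enterprise Management Window script editor or with the SMcli command-line interface. A minimal SMcli sketch is shown below; the controller IP addresses and output file name are placeholders for your subsystem's values.

      # Apply the VMware Not Ready sense data script to the storage subsystem
      SMcli 9.11.22.33 9.11.22.34 -f DisableUA_reporting_inVMware.script -o DisableUA_output.txt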
===========================================================================
4.0 WEB Sites and Support Phone Number
--------------------------------------
4.1 IBM System Storage Disk Storage Systems Technical Support web site:
    http://www.ibm.com/systems/support/storage/disk
4.2 IBM System Storage Marketing Web Site:
    http://www.ibm.com/systems/storage/
4.3 IBM System Storage Interoperation Center (SSIC) web site:
    http://www.ibm.com/systems/support/storage/ssic/
4.4 You can receive hardware service through IBM Services or through your IBM reseller, if your reseller is authorized by IBM to provide warranty service. See http://www.ibm.com/planetwide/ for support telephone numbers, or in the U.S. and Canada, call 1-800-IBM-SERV (1-800-426-7378).

IMPORTANT: You should download the latest version of the DS Storage Manager host software, the DS4000/DS5000 storage subsystem controller firmware, the DS4000/DS5000 drive expansion enclosure ESM firmware and the drive firmware at the time of the initial installation and when product updates become available. For more information about how to register for support notifications, see the following IBM Support Web page:
ftp.software.ibm.com/systems/support/tools/mynotifications/overview.pdf
You can also check the Stay Informed section of the IBM Disk Support Web site, at the following address:
www.ibm.com/systems/support/storage/disk

=======================================================================
5.0 Trademarks and Notices
--------------------------
The following terms are trademarks of the IBM Corporation in the United States or other countries or both: IBM, DS3000, DS3500, DCS3700, DCS3860, DS4000, DS5000, FAStT, System Storage, the e-business logo, xSeries, pSeries, HelpCenter.
Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a registered trademark of Linus Torvalds.
ESX Server is a registered trademark of VMware, Inc.
Other company, product, and service names may be trademarks or service marks of others.

=======================================================================
6.0 Disclaimer
---------------
THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IBM DISCLAIMS ALL WARRANTIES, WHETHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE AND MERCHANTABILITY WITH RESPECT TO THE INFORMATION IN THIS DOCUMENT. BY FURNISHING THIS DOCUMENT, IBM GRANTS NO LICENSES TO ANY PATENTS OR COPYRIGHTS.

Note to U.S. Government Users -- Documentation related to restricted rights -- Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corporation.

Refer to the appropriate Storage Manager Client package for the operating system that you plan to use for management of the DS4000 Storage Subsystem.