IBM InfoSphere Identity Insight Version 8.1 Fix Pack 2 - Release Notes

These release notes contain information to ensure the successful installation and use of IBM® InfoSphere® Identity Insight Version 8.1 Fix Pack 2. Included is information about updates, fixed problems, usage notes, and known problems.

These release notes are provided with the installation download and media and are included in the product information center. The very latest version is available from the IBM InfoSphere Identity Insight Support portal.

About IBM InfoSphere Identity Insight

IBM InfoSphere Identity Insight helps organizations solve business problems related to recognizing the true identity of someone or something ("who is who") and determining the potential value or danger of relationships ("who knows who") among customers, employees, vendors, and other external forces. IBM InfoSphere Identity Insight provides immediate and actionable information to help prevent threat, fraud, abuse, and collusion in all industries.

This product was formerly titled IBM Relationship Resolution.

Summary of product enhancements and fixes for Version 8.1 Fix Pack 2

Fix Pack 2 includes the following product enhancements and fixes:
Pipeline improvements and fixes
  • DSRC_ACCT.SRC_CREATE_DT field update fix
  • Pipeline logging forced resolution to ER_FORCED_LOG fix
  • Race condition in Address Parser fix
  • Populating LOAD_ID and LOAD_GROUP columns in the UMF_LOG table fix
  • Minimum Alert threshold fix
  • Disclosed relation fix
  • UMF loader handling of null values fix
  • CEP error logging fix
  • Patched binaries for better NC_COA initialization
  • Wrapping the sendMessage() method in a try/catch block.
Java
Recompilation of Java (common_java/b8.1.0.1) to work with Oracle 11.2.0.x.0.
Cognos
The entity resume does not complete if there are alerts related to the entity (Cognos) - fixed by modifying the RESUME_CONFLICTS view in the fix pack SQL.
Web services
  • Improvement to search.
  • getEntityNetworkStatistcs fix for soft deleted entities.
ILOG
  • Performance improvements.
  • Error Handling improvements.
  • Logging enhancement.
  • New SOA calls (getEntityDetailMultiple() and getDirectEntityRelationshipsMultiple()).
  • Re-resolve to pipeline sometimes gets pipeline error fix.
  • Addition of five new parameters that affect SOA functions and pre-fetching; fixed two issues with pre-fetching; fixed memory consumption by graphs that are no longer used.
  • Added a new feature that stops pre-fetching for graphs that are no longer used.
  • Fixed a data integrity issue; an entity with no attributes can now be drawn in a graph.
EntitySearcher
Changed the default time-out value for entity searcher.
Note: Fix Pack 2 includes the fixes from FP1. FP1 fixes are described in the product information center at http://pic.dhe.ibm.com/infocenter/easrr/v8r1m0/index.jsp.

Product documentation

Version 8.1 Fix Pack 2 product documentation can be found in the following places:
Product installation downloads
Contain release notes with additional installation instructions.
Version 8.1 information center
Access at ibm.com®: http://pic.dhe.ibm.com/infocenter/easrr/v8r1m0/index.jsp
Access on your local server, installed as part of the product install. The information center can be launched in the following ways:
  • Click the Help button in the Configuration Console.
  • Click the Help button in the Visualizer.
  • Open a Web browser and enter the URL for your server: http://<servername>:<HTTP port#>/help/index.jsp
    <HTTP port#>
    WebSphere® Application Server port number that you specify during installation.
    <servername>
    WebSphere Application Server hostname or IP address.
IBM product Support home
Access at ibm.com: http://www.ibm.com/support/entry/portal/Software/Information_Management/InfoSphere_Identity_Insight
In addition to Technotes and other support information, the portal contains links to the information center, PDF versions of the product information, and the latest updates of the release notes.

Installing Fix Pack 2 for IBM InfoSphere Identity Insight Version 8.1

There are four basic Fix Pack 2 installation scenarios:
Version 8.1 installed - add Fix Pack 2
You have installed Version 8.1. Install Fix Pack 2 using the fix pack installation program. Fix Pack 2 includes all fixes released since the Version 8.1 GA release, including Fix Pack 1.
Version 8.1 with Fix Pack 1 installed - add Fix Pack 2
You have installed Version 8.1 and Fix Pack 1. Install Fix Pack 2 using the fix pack installation program.
Version 8.1 is not installed
You must first install or upgrade to Version 8.1. Then install Fix Pack 2.
Version 8.1 Fix Pack 1 with IBM Informix
This installation is for customers who want to use an IBM Informix database. It requires a separate installation program that installs a new Version 8.1 system adapted for Informix plus the enhancements and fixes that are part of Fix Pack 1. Note: This special Informix installer installs the product up to Version 8.1 FP1. Contact IBM Services or IBM Support to obtain this version of the product.

For more information, see the product Support portal. Planning, installation, and configuration information is also available in the product information center at ibm.com. Note the installation-related items in this document.

MD5 Checksum information

You can use the following MD5 checksum information to check for errors introduced during transmission or storage:
4bd632be01e68867cf8a2ab9562a04b6  Aix_pwr5_8.1_fp2.tar
c0c4e5106fb22073f25d80a16685a218  hpux_ia64_8.1_fp2.tar
f900e7f8b42e716479bdc79ceaad63e7  linux_s390x_8.1_fp2.tar
a5ea57cedef64eef0ea690066f9801af  linux_x64_8.1_fp2.tar
459a28a5ef79af83d4c101090a70be0d  linux_x86_8.1_fp2.tar
eb0b279bf873de40cd1b0592346474bf  solaris_sparcv10_8.1_fp2.tar
645b853ee42b365f6cde04d20d1a758c  win_x64_8.1_fp2.zip

Required actions and considerations before installing Fix Pack 2

Important: You must do the following before installing or upgrading to Fix Pack 2.
Pipeline
Windows installation: Stop and delete the old pipeline services before running the installer to upgrade to Version 8.1 Fix Pack 2.

Non-Windows Installation: Make sure that no pipeline is running. The upgrade will fail if a pipeline is running.
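For example, on Windows you can stop and delete a pipeline service from an Administrator command prompt, and on UNIX or Linux you can check for running pipeline processes before starting the upgrade. The service name is a placeholder; use the names under which your pipelines are registered:
  sc stop "<pipeline service name>"
  sc delete "<pipeline service name>"
  ps -ef | grep pipeline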

Embedded WebSphere Application Server
Before you install, stop the embedded version of WebSphere Application Server by running the appropriate script from the install root directory (see the example after this list):
  • On Windows: StopEAS.bat
  • On other platforms: stopEAS.sh
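For example, on UNIX or Linux (the install root directory shown is illustrative; substitute your own installation path):
  cd /opt/IBM/IdentityInsight
  ./stopEAS.sh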
Installation directory and Identity Insight schema
Back up your installation directory and Identity Insight schema before upgrading.
Database Environment:
Validate that the database directory and instance variables are set correctly for your database type.
User ID required for upgrade:
The person performing the installation must use the same user ID that was used to install the product.
Note: Upgrades should always be done by logging on as the product owner, not as root.
Upgrading from future v8.0 fix packs and hotfixes
This fix pack supports upgrading from Version 8.1 to Version 8.1 Fix Pack 2. Version 8.1 Fix Pack 2 includes:
  • All previous hotfixes on the v8.1 branch.
  • All previous hotfixes and fix packs from the v8.0 branch, up to and including 8.0.0.147.
If you apply hotfixes after hotfix 147 to your v8.0 branch installation, please contact IBM Support before upgrading to Version 8.1 Fix Pack 2.

Considerations and issues for Microsoft Windows 7 and Microsoft SQL Server 2008 users

If you use IBM InfoSphere Identity Insight on a Microsoft Windows 7 client or use Microsoft SQL Server 2008 as your database, be aware of the following before installing:

Windows Server 2003 or 2008 for pipeline or Application Server with a DB2® v9.5 or v9.7 database server:
A potential issue exists for IBM Identity Insight Version 8.1 customers who are using Windows Server 2003 or Windows Server 2008 operating systems for the pipeline or Application Server and IBM DB2 Version 9.5 or Version 9.7 database server. Latin-1 or UTF-8 data may not be encoded correctly by Identity Insight Version 8.1 with this operating system-database combination. If you are using a DB2 v9.5 or v9.7 database, you are strongly encouraged to install IBM Identity Insight Version 8.1 in a test environment and verify correct encoding of Latin-1 or UTF-8 data. Check the following columns:

Table=NAME
Columns=LAST_NAME, FIRST_NAME, MID_NAME, NAME_PFX, NAME_SFX, NAME_GEN

Table=ADDRESS
Columns=ADDR1, ADDR2, ADDR3, CITY, STATE

Table=ATTRIBUTE
Column=ATTR_VALUE

If the data in any of these tables appears to be incorrectly encoded, check that the following environment variable is set on your Windows pipeline server:
DB2CODEPAGE
Set this variable to the same value as the DB2 database CODEPAGE setting. For example, if the DB2 database configuration is:
CODEPAGE=1208
CODESET=UTF-8
The DB2CODEPAGE environment variable should be set as follows:
set DB2CODEPAGE=1208
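You can confirm the database code page before setting the client variable. For example (the database name MYDB is illustrative):
  db2 get db cfg for MYDB
Look for the "Database code page" value in the output (1208 in this example) and set DB2CODEPAGE to match. On a DB2 client, the variable can typically also be set persistently in the DB2 profile registry with db2set DB2CODEPAGE=1208.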

Considerations and issues for Oracle database users

If you use an Oracle database with IBM InfoSphere Identity Insight, be aware of the following additional known issues:
Do not segment or partition database tables that are used for UMF input transports
Database tables that are used for UMF input transports must not be segmented or partitioned. Segmenting or partitioning these transport tables will result in errors in the pipeline transport and lost data.
Visualizer usage note for Find-by-Attribute function (using an Oracle database)
This issue is resolved.

Considerations and issues for IBM Informix users

If you use IBM Informix as your database, be aware of the following additional known issues:

IBM Informix Dynamic Server Release 11.70.xC4DE database support
Support for IBM Informix Dynamic Server Release 11.70.xC4DE databases (Ultimate Edition and Ultimate Warehouse edition) is provided with the following Application Server architectures (Native OS implementation only):
  • IBM AIX 6.1 - 64 bit - POWER 5/6
  • Hewlett-Packard HP/UX 11i v2 - 64 bit - IA64
  • Linux x86
  • Linux x86_64
  • 64-bit Linux on System z
  • Novell SUSE Enterprise Linux 10 - 64-bit - IBM System z only
  • Novell SUSE Enterprise Linux 11 - 64-bit, x86_64
  • RedHat Enterprise Linux 5 - 64-bit, x86_64
  • Sun/Oracle Solaris 10 - 64 bit - UltraSPARC IV (and newer compatible)
  • Microsoft Windows 2008 Server - 64-bit, x86_64
Note: IBM Informix Dynamic Server 11.7 DataBlade functionality is not supported.

See the section on system requirements for detailed information.

Installation prerequisites and setup information for Informix
The INFORMIXSERVER and INFORMIXDIR environment variables must be set before you run the installer to properly configure an Informix installation (see the example after this list).
  • INFORMIXSERVER should point to the instance of the target DB server, for example, ol_informix1170.
  • INFORMIXDIR should point to the root installation directory where the Informix client or server package is installed. On UNIX, this is typically under /opt/IBM/informix, possibly with a version number appended. On Windows, it is usually under Program Files/IBM/Informix.
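For example, in a UNIX shell (the path and server name are illustrative):
  export INFORMIXDIR=/opt/IBM/informix
  export INFORMIXSERVER=ol_informix1170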
Minimum page sizes
IBM Identity Insight databases managed with Informix must be created with a minimum page size of 8 KB. If this is not done, installation fails with an error similar to the following:
java.sql.SQLException: Total length of columns in constraint is too long. 
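A minimal sketch of creating an 8 KB dbspace with the Informix onspaces utility; the dbspace name, path, offset, and size are illustrative and must be adapted to your system:
  onspaces -c -d iidbs8k -k 8 -p /informix/data/iidbs8k -o 0 -s 2048000
The Identity Insight database can then be created in that dbspace, for example: CREATE DATABASE <database name> IN iidbs8k WITH LOG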
Logging mode
Logging mode must be enabled for the Informix database when it is created. Do not use the MODE ANSI keyword. Doing so can break pipeline transaction-handling functions.
Running SUIT
When installing to an Informix database, the Schema Upgrade and Installation Tool (SUIT) must be run with the command-line parameter -mbi 1. This parameter disables statement "batching" and is required for Informix versions 11.50 and 11.70. If it is omitted, the SUIT operation fails with the error "maximum statement length exceeded" and aborts. See SUIT: Schema Upgrade and Installation Tool version 8.1.0.2.
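An illustrative command line, following the sample values used in Table 1 (the exact placement of the -mbi option relative to the other options may vary; check the SUIT usage output on your system):
  suit -t informix -s dbHost -d dbName -u jdoe -n ol_informix -mbi 1 RELRES.NO_HIST upgrade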
Environment variables
Informix has two additional environment variables that must be set before running the installation: CLIENT_LOCALE and DB_LOCALE. The client and database locale values must be the same or their code sets must be convertible. It is best if both are the same, and both must use the UTF-8 code set. The code set name and language name must both be lowercase when used in the JDBC call.
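For example, in a UNIX shell (the locale name is illustrative; use the lowercase name of the UTF-8 locale defined for your Informix server):
  export CLIENT_LOCALE=en_us.utf8
  export DB_LOCALE=en_us.utf8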
Enabling logging for Transactional support
In order for transactions to work with Informix, the database must be created with logging enabled. You can do this through dbaccess using the menu options, or specify it when using DDL:
CREATE DATABASE <database name> WITH LOG
Transaction-handling configuration
The following configuration settings are required for transaction handling (a combined example follows this list):
  • Row-level table locking: DEF_TABLE_LOCKMODE=ROW in the onconfig database-configuration file.
  • Row lock handling: SET LOCK MODE TO WAIT on the database connection.
  • Optimizer plan statistics: run UPDATE STATISTICS HIGH on the database after some data has been loaded. This does not work on an empty database.
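A combined sketch of these settings; the onconfig entry is edited in the configuration file (restart the server afterward), and the SQL statements are run against the database:
  DEF_TABLE_LOCKMODE ROW
  SET LOCK MODE TO WAIT;
  UPDATE STATISTICS HIGH;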
Required entries for LD_LIBRARY_PATH:
  1. Add the following entries to LD_LIBRARY_PATH (see the example after this list):
    • $INFORMIXDIR/lib
    • $INFORMIXDIR/lib/cli
    • $INFORMIXDIR/lib/esql
  2. Ensure that $INFORMIXDIR/bin is in the $PATH.
  3. After setting up the Informix database, you must update the ODBC configuration. For UNIX and Linux, see: http://publib.boulder.ibm.com/infocenter/idshelp/v117/topic/com.ibm.odbc.doc/ids_odbc_062.html. For all platforms: http://publib.boulder.ibm.com/infocenter/idshelp/v117/index.jsp?topic=/com.ibm.relnotes.doc/notes/csdk_370xc3/mach/odbc.html
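For example, in a UNIX or Linux shell:
  export LD_LIBRARY_PATH=$INFORMIXDIR/lib:$INFORMIXDIR/lib/cli:$INFORMIXDIR/lib/esql:$LD_LIBRARY_PATH
  export PATH=$INFORMIXDIR/bin:$PATH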
JDBC files
Make sure that $INFORMIXDIR/jdbc/lib exists before installing. Some Informix installations do not automatically create the JDBC files. These files must exist before installing:
C:\PROGRA~1\informix_1170\jdbc\lib>dir
Volume in drive C has no label.
Volume Serial Number is D477-2F8F

Directory of C:\PROGRA~1\informix_1170\jdbc\lib

18/07/2012  12:31          <DIR>     .
18/07/2012  12:31          <DIR>     ..
23/09/2011  20:01          816.096 ifxjdbc.jar
23/09/2011  19:58            44.939 ifxjdbcx.jar
23/09/2011  19:58        1.585.532 ifxlang.jar
23/09/2011  19:58          307.332 ifxlsupp.jar
23/09/2011  19:58          806.318 ifxsqlj.jar
23/09/2011  19:58            48.982 ifxtools.jar
              6 File(s)      3.609.199 bytes
              2 Dir(s)  1.093.832.704 bytes free 
Do not segment or partition database tables that are used for UMF input transports
Database tables that are used for UMF input transports must not be segmented or partitioned. Segmenting or partitioning these transport tables will result in errors in the pipeline transport and lost data.
Known issue: Informix sometimes returns trailing spaces on data
Informix sometimes returns trailing spaces on data. If this occurs:
  1. Edit the on_config_<server name> file, which is located in $INFORMIXDIR/etc.
  2. Set the following parameter:
    IFX_LEGACY_CONCAT 1
  3. Then restart the server.

Considerations if you have custom Output Documents or DQM Rules

If you have previously worked with IBM Professional Services, Support, or Engineering to create additional UMF Output Documents in your system (UMF_OUTPUT_RULE, UMF_OUTPUT_FORMAT, and UMF_OUTPUT_PARAM table entries), note that Fix Pack 2 adds new entries to these tables in the RULE_ID 10000-30000 range. Any custom entries that you have created might conflict with these new entries and cause issues with the upgrade. Contact IBM Support before initiating the upgrade to Fix Pack 2.

UMF Output Documents cannot be created or deleted via the Configuration Console, but may be viewed via: Setup --> UMF --> Output Documents.

If you have added new DQM Rules without using the Configuration Console, your custom DQM Rules may also conflict with new rules being added in this release. Your DQM Rules should be numbered 1000000 or higher to avoid conflicts. Examine the DQM_RULE.RULE_ID column for any rules that you have added. If you have potentially conflicting rules, contact IBM Support before initiating the upgrade to Fix Pack 2.
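As a starting point, a query such as the following (adapt it to your database client) lists all rule IDs below the recommended custom starting value; compare the result against the product-supplied rules to identify custom rules that might conflict:
  SELECT RULE_ID FROM DQM_RULE WHERE RULE_ID < 1000000 ORDER BY RULE_ID;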

Modified views and deprecated tables

Upgrading to Fix Pack 2 will modify existing views. No tables or columns are deprecated with this upgrade.

Modified views
Installing Fix Pack 2 overwrites the product views. If you have modified any of the listed views, back up your changes to prevent them from being lost.
Fix Pack 2 modified or added views include:
BEST_ENTITY_INFO
CONFLICT_RPT
SOA_ALERT_ENTITY_LIST
VIS_RELATEDENTITIES
VIS_MAA_ASGN_DET
VIS_MAA_UNASGN_DET
VIS_RELATIONSHIP_SUMMARY
RESUME_CONFLICTS
VIS_GEM_EVENT_ALERT_UNASGN_DET
VIS_GEM_EVENT_ALERT_ASGN_DET
VIS_GEM_EVENT_ALERT_DET
COG_ROLE_ALERT_DETAIL
COG_RESUME_CONFLICTS
RPT_RE_UNION
RPT_RESUME_RELS1_SUB
RPT_RESUME_RELS2_SUB
COG_RELATIONSHIP_SUMMARY
COG_CONFLICT_PATHS
VIS_INBOX_GET_RULE
VIS_INBOX_ROLE_ALERT_RAW
VIS_INBOX_ROLE_ALERT
COG_INBOX_ROLE_ALERT
VIS_INBOX_ROLE_ALERT_RAW_ASGN
VIS_INBOX_ROLE_ALERT_RAW_CLSD
VIS_INBOX_ROLE_ALERT_ASGN
VIS_INBOX_ROLE_ALERT_CLSD
VIS_RA_UNASGN_SUM
COG_RPT_RE_UNION
COG_RELATED_ENTITIES
SOA_RELATED_ENTITIES
SOA_ENT_NTWRK_STATS
SOA_ENTITY_SUMMARIES
New views
SEP_RELATIONS_VALID
Deprecated tables and columns
None.
Note: Backing up your modified views and tables to prevent loss is also necessary if you are upgrading to Version 8.1 in preparation for installing Fix Pack 2. See related information in the information center (or PDF version) under Installing and upgrading > Upgrading the product > Upgrade items > Customized views overwritten or deleted during upgrade.

Starting the Fix Pack 2 installation program

Complete the following steps to start the product installation program to install Fix Pack 2.
On Microsoft Windows:
You must copy the product installation file to a local drive. The product installation program will not run from either the installation media or from a network drive.
On AIX, HP-UX, Linux, and Solaris:
To enable the License-print function within the Installer running in GUI mode, you need to define your printer within the X-windows subsystem that you are running on the client machine.
To enable the License-print function within the Installer running in command line mode, you need to set up a default print-queue and printer on the machine you are installing on.
  1. Obtain the IBM InfoSphere Identity Insight product software download package.
  2. Do one of the following steps:
    1. On Microsoft Windows: If you obtained a .zip file, extract it to a temporary directory on a local drive of the target installation machine.
      Note: On Microsoft Windows, you must copy the product installation file to a local drive. The product installation program will not run from either the installation media or from a network drive. Ensure that the file is extracted with the directory structure intact.
    2. On AIX, HP-UX, Linux, and Solaris: If you obtained a .tar file, extract it to a temporary directory on a local drive of the target installation machine.
      Note: Ensure that the .tar file is extracted with the directory structure intact. Ensure that the product installation file's parent directory structure of \Disk1\InstData\VM\ is retained if you copy the installation file to another location.
  3. Navigate to the /platform/Install/Disk1/InstData/VM/ directory, and run the installer program.
    1. To run the installer in GUI mode, double-click or launch the installer program.
    2. To run the installer in command line mode, from the command line, append -i console when executing the installer program.
      For example: prompt> ISII_81_FP2_aix_ppc.bin -i console
      Operating system platform          Installer file
      Microsoft Windows Server x86_64    ISII_81_FP2_win_x64.exe
      IBM AIX                            ISII_81_FP2_aix_ppc.bin
      HP-UX                              ISII_81_FP2_hpux_ia64.bin
      Linux x86                          ISII_81_FP2_linux_x86.bin
      Linux x86_64                       ISII_81_FP2_linux_x64.bin
      64-bit Linux on System z           ISII_81_FP2_linux_s390x.bin
      Sun Solaris                        ISII_81_FP2_solaris_sparc.bin
  4. On the License Agreement - Software License Agreement panel, review the license agreement and select the Agree to Continue button.
  5. Follow the instructions on the installation program wizard or the command line.

Completing Fix Pack 2 installation

For an upgrade installation, all product features will be installed, even if the previous installation did not have all features installed. Complete the following installation program panels to install Fix Pack 2.

  1. On the Introduction panel, review the screen.
  2. On the License Agreement - Software License Agreement panel, review the license agreement and select the Agree to Continue button.
  3. On the Destination - Choose Install Folder panel, type or browse to the directory (fully qualified path) where IBM InfoSphere Identity Insight is installed.
    Note: If browsing to an installation directory, you must click the Browse button, then browse to the directory one level above the install directory (create the new directory if needed). Then select the install directory and click the Open button.
  4. On the Product Features panel, you will see all updates and enhancements included in this fix pack.
  5. On the Database Information panel, review the installed database information on the screen. It should be pre-filled. Validate to ensure you are updating the right database. Note the database information. You will need it for a database update task after the product upgrade and installation is complete.
  6. On the Database Update panel, select the option to skip database population. You must run SUIT to upgrade your database after the installation or upgrade is complete. See Updating the database with SUIT.
  7. Review the Cognos Report panel.
  8. On the Pre-Installation Summary panel, review the summary. Click the Previous button if any changes are needed. Then click the Install button to start the product upgrade.
    Note: When installing on Solaris systems, you might see the No such file or directory message on the final window. You can safely ignore this warning message.

Verifying Fix Pack 2 installation

Verify that you have successfully upgraded to Fix Pack 2:
  1. Verify that the pipeline has been updated. Run the pipeline command without options and confirm that the output shows:
    ** pipeline v8.1.0.120
  2. Verify that Web services have been updated. Open a Web browser and enter the URL for your installation: http://w2k8qa1.svl.ibm.com:17110/easws/api/soap (example). You should see the following:
    8.1.0.120 - 20130308_1058
    - - - - - - - - - - - - - - - - - - - - 
    RESOURCE_SUBDIRECTORY = soap
    WRAPPED_DOCUMENT_LITERAL = true
    SERVICE_NAME = EntityResolver
Note: The exact numbers for the fix pack builds (above) are in the downloadable version of these release notes at ibm.com.

Updating the database with SUIT

You must update the database that you are using with IBM Identity Insight after the product is installed or upgraded if you chose to skip the database population on the Database Update panel (recommended above). Use the Schema Upgrade and Installation Tool (SUIT) to update the database. You must update the following:
  • RELRES.NO_HIST
  • EAS-Console
  • CMEAdmin
You can use SUIT to make changes directly to the database with the -auto option, but it is usually preferable to direct the SUIT output to an SQL file that you can examine before running against the database.
  1. Update the database that you are using with Identity Insight with SUIT.
    Table 1. SUIT command examples by database type
    Database: Informix
    Example: suit -t informix -s dbHost -d dbName -u jdoe -n ol_informix RELRES.NO_HIST upgrade -auto
    Notes: Automatically upgrade RELRES.NO_HIST on IBM Informix host dbHost (default port 9088), database name dbName, server name ol_informix, user ID 'jdoe'. Will prompt for password.

    Database: DB2
    Example: suit -t db2 -s dbHost -o 50000 -u db2inst1 -d dbInstance RELRES.NO_HIST upgrade
    Notes: Print SQL to upgrade RELRES.NO_HIST on DB2 host dbHost, port 50000, instance dbInstance, user ID 'db2inst1'. Will prompt for password.

    Database: Oracle
    Example: suit -t oracle -s dbHost -u frank -d dbInstance RELRES.NO_HIST install
    Notes: Print SQL to install RELRES.NO_HIST on Oracle host dbHost, instance dbInstance, user ID 'frank'. Will prompt for password.

    Database: SQL Server
    Example: suit -t mssql2008 -s dbHost -d dbName -u jdoe RELRES.NO_HIST upgrade -auto
    Notes: Automatically upgrade RELRES.NO_HIST on SQL Server 2008 host dbHost (default port 1433), database name dbName, user ID 'jdoe'. Will prompt for password.

    Database: DB2 (output redirected to a file)
    Example: suit -t db2 -s dbHost -d dbName -u jdoe -n ol_db2 RELRES.NO_HIST upgrade > 8.1_FP2_Upgrade_RELRES.NO_HIST.sql
    Notes: Save the upgrade SQL to a file (8.1_FP2_Upgrade_RELRES.NO_HIST.sql) to be run after examining it.

    See SUIT: Schema Upgrade and Installation Tool version 8.1.0.2 for a list of SUIT commands and options.

Graph server initialization parameters

Server administrators can set graph server initialization parameters in the graph.properties file to improve performance. These parameters are:
getEntityDetailMultiple_chunkSize=nn
10 is the default. 1 is the minimum.
getEntityDetailMultiple_usePreFetching=true/false
True is the default and is recommended.
getEntityRelationshipsMultiple_chunksize=nn
10 is the default. 1 is the minimum.
getEntityRelationshipsMultiple_usePreFetching=true/false
True is the default and is recommended.
getEntityRelationshipsMultiple_limit=nnnnn
10000 is the default. 1 is the minimum.
To set these parameters:
  1. Open the graph.properties file, located in the srd-home/graphs directory.
  2. Look for and edit the line that contains one of the above parameters or add a line if the one you need is not already there.
  3. Save the file.
  4. Restart the graph server. Changes will not take effect until the graph server is restarted.
The default value is used for any absent or unspecified parameters.
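For illustration, a graph.properties fragment that sets all five parameters explicitly to their default values (adjust the values for your data and hardware):
  getEntityDetailMultiple_chunkSize=10
  getEntityDetailMultiple_usePreFetching=true
  getEntityRelationshipsMultiple_chunksize=10
  getEntityRelationshipsMultiple_usePreFetching=true
  getEntityRelationshipsMultiple_limit=10000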

chunkSize parameters

The purpose of getEntityDetailMultiple_chunkSize and getDirectEntityRelationshipsMultiple_chunkSize is to avoid long-running queries that consume large amounts of SOA server memory and pipeline memory in a single SOA call. The larger the values used for those two parameters, the fewer total SOA calls are made to the pipeline, and the faster the total amount of data for the graph is retrieved. However, with larger chunk sizes, each SOA call requires more memory.

For example, if there are 150 entities in a given level of the graph, then instead of making a single call to getEntityDetailMultiple for all 150 entities at once, the graph server makes 15 calls (150 / getEntityDetailMultiple_chunkSize, with chunkSize=10).

This parameter can be adjusted based on your data and hardware to obtain an optimal balance. This is also true for getDirectEntityRelationshipsMultiple_chunkSize.

PreFetching parameters

Once the graph has been displayed, the pre-fetching feature will continue to gather information about the entities on the graph. This additional information makes the current graph operate more efficiently.

The purpose of getEntityDetailMultiple_usePreFetching and getDirectEntityRelationshipsMultiple_usePreFetching is to throttle back or turn off prefetching.

You can control prefetching as follows:
Prefetching completely off
Set getEntityDetailMultiple_usePreFetching=false and getDirectEntityRelationshipsMultiple_usePreFetching=false.
Prefetching gathers relationships only
Set getEntityDetailMultiple_usePreFetching=false and getDirectEntityRelationshipsMultiple_usePreFetching=true.
Prefetching gathers entity data only
Set getEntityDetailMultiple_usePreFetching=true and getDirectEntityRelationshipsMultiple_usePreFetching=false.

Parameter getDirectEntityRelationshipsMultiple_limit

The graph will attempt to identify which entities on the graph are related to each other beyond the relationship to the primary entity. This is a time-consuming feature. The graph attempts to find these relationships for the first n entities, where n=getDirectEntityRelationshipsMultiple_limit. Setting this value to 1 essentially turns the feature off: graph performance goes up, but some entities will not be identified as related on the graph.

For example, E1 → E2 and E1 → E3 and E1 → E4. The graph always shows this correctly. However, E2, E3, and E4 may be related to each other. Set getDirectEntityRelationshipsMultiple_limit= 1 to prevent showing these secondary relationships.

The blue rectangle on each entity now identifies the number of relationships not displayed on the graph regardless of whether the related entity is displayed on the graph.

The following related improvements are part of this new function:
Locking granularity has been fine-tuned
Locking granularity has been fine-tuned to allow for better interleaving between the server-side thread that handles user commands from the browser and the server-side prefetching thread.
Five new graph server initialization parameters have been added
Server administrators can set additional graph server initialization parameters in the graph.properties file to improve performance.
Known behaviors: getDirectEntityRelationshipsMultiple() and getDirectEntityRelationships() return null in both the following scenarios:
  • Entity exists but has no relationships.
  • Entity does not exist.

Known issues and changes when using the product

Be aware of the following considerations and known issues related to using the product:

Match/Merge processing and From/Thru dates
If you have a specific requirement for From/Thru dates to be taken into account during Match/Merge processing, explicitly set the DATERANGETHRESHOLD value to '0' in the pipeline.ini file as part of this fix pack upgrade. This ensures that From/Thru dates continue to be honored. Note that '0' is not the recommended value. Contact IBM Support for further information and guidance.

If you have previously set [MM] DATERANGETHRESHOLD = -1 in your configuration, your system will now correctly ignore the From/Thru dates during Match/Merge processing.

Do not specify ODBC Isolation level=1 (uncommitted Read) on any ODBC client
Do not specify ODBC Isolation level=1 (uncommitted Read) on any ODBC client. This is particularly true when using a multi-threaded pipeline, or multiple pipelines. Doing so can cause data corruption or unexpected pipeline shutdown. Example pipeline output:
08/07 14:49:38 [pipeline:503380304] CRIT: CRITICAL ERROR:
08/07 14:49:38 [pipeline:503380304] CRIT: {
08/07 14:49:38 [pipeline:503380304] CRIT:  Requested resolve config for an invalid entity type: 0
08/07 14:49:38 [pipeline:503380304] CRIT:  Check logs or UMF_EXCEPT table for more information.
08/07 14:49:38 [pipeline:503380304] CRIT: }  
MERGE_ID and NUM_MERGED tags not used or required
The <MERGE_ID> and <NUM_MERGED> tags are not used or required by the pipeline. If you use them, they will be ignored by the pipeline. Only a single <MERGE> segment can be present in a UMF input document.
ILOG graph generates error for custom number types
The Attribute Alert section of the ILOG graphing tool can generate an error when displaying an entity that contains a custom number type.

If new Number-Types or Attribute-Types are added to the system via the Configuration Console, you must restart the pipelines and the "Graph server" within eWAS (or the entire eWAS) before the iLOG Graph can display these new types correctly.

Viewing ILOG graphs
Do not use the browser back arrow when viewing ILOG graphs.

An unresponsive script message can appear when viewing graphs in Firefox or other browsers. Select the "Don't show message" check box and the "Continue running script" option to prevent subsequent messages and issues when loading graphs in the browser.

MATCH_ID column on the SEP_CONFLICT_REL table
The MATCH_ID column on the SEP_CONFLICT_REL table is currently defined as a NUMBER(10). This can artificially limit the size of values that can be stored in this table and cause SQL insert failures and pipeline shutdown with large data volumes. To allow larger Match-ID numbers to be stored, the MATCH_ID column type can be safely altered to maximum precision / BIGINT.
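Illustrative ALTER statements for this change; verify the exact syntax for your database type and version, and back up the table first:
  ALTER TABLE SEP_CONFLICT_REL MODIFY (MATCH_ID NUMBER(19));                 -- Oracle example
  ALTER TABLE SEP_CONFLICT_REL ALTER COLUMN MATCH_ID SET DATA TYPE BIGINT;   -- DB2 example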
Handling names that include single-name aliases (Main Name 'a/k/a' Alias Name)
To handle names that include single-name aliases (Main Name 'a/k/a' Alias Name), do the following (a sketch of the alias segment appears at the end of these steps):
  1. The Main Name can be processed using the standard 'M' <NAME_TYPE> tag, in either an unparsed (<FULL_NAME>) or parsed (<FIRST_NAME>, <LAST_NAME>) format.
  2. An alias name can be encoded as follows:
    <NAME_TYPE>A</NAME_TYPE> 
    <DSRC_CODE>+<DSRC_ACCT> = same as the Main Name record
    The alias name is passed in via the <FULL_NAME> tag, with DQM Rule #289 (Alternate Parse) enabled for the NAME segment.

    This approach enables the alias name to be encoded in a way that maximizes the possibility of name-matching. This is particularly useful if the alias is a single-token name.
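    A minimal sketch of the alias NAME segment; the <NAME> wrapper element and the rest of the UMF document structure are assumed, the alias value is illustrative, and the DSRC_CODE and DSRC_ACCT values must match the Main Name record:
    <NAME>
      <NAME_TYPE>A</NAME_TYPE>
      <FULL_NAME>Madonna</FULL_NAME>
    </NAME>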

Configuration utility (eacfg) known behavior
In Solaris and AIX environments, when running the configuration utility (eacfg) with LOCALE set to 'en_UTF8', a secondary window might appear that steals focus from the main window. This does not affect the operation of the tool and can be worked around by temporarily setting $LOCALE='C' before launching the tool.
DateCriterion object in SOAP or REST service requests
When using the DateCriterion object in SOAP or REST service requests, be aware of how date values are stored in the database (ATTRIBUTE.ATTR_VALUE). For example, if a particular date attribute is stored without a timestamp (that is, date only), matches might not be found if values are submitted with timestamps. If you are using a SOAP client that forces a timestamp value and you are searching for DOB, you can supply a 00:00:00 timestamp and use a withDays value of 1. This has the side effect of forcing the search to use EQ/find-by-attribute whenever a DOB is supplied.

If your SOAP client allows you to specify a Date with no timestamp, then make sure the adjustment is in place when searching for dates without timestamps. If you are creating a SOAP client with Java or other programming language, it is possible to format the date in your client code to be date only. Use standard date-formatting functions supplied by the language provider.

UMF pipeline processing
Order matters for FROM and THROUGH dates in pipeline processing. When sending data that has FROM and THROUGH dates, supplying them out of order can create problems.

To see the latest information about known problems and issues

Known problems are documented in the form of individual technotes in the Support portal at http://www.ibm.com/support/entry/portal/Software/Information_Management/InfoSphere_Identity_Insight:
  1. Use the Search Support feature and in the Enter terms, error code or APAR # field, enter a keyword, phrase, error code, or APAR number to search on.
  2. Select Solve a problem.
  3. Click Search.

As problems are discovered and resolved, the IBM Support team updates the Support portal. By searching the Support portal, you can quickly find solutions to problems.

At time of publication, there were no known installation problems. Check the Support portal for the most current information.

System requirements updates

For the latest information about hardware and software compatibility, see the detailed system requirements document at http://www.ibm.com/support/entry/portal/Software/Information_Management/InfoSphere_Identity_Insight.

SUIT: Schema Upgrade and Installation Tool version 8.1.0.2

SUIT is a Schema Upgrade and Installation Tool that comes with IBM Identity Insight.

When installing to an Informix database, the Schema Upgrade and Installation Tool (SUIT) must be run with the following command line parameter: -mbi 1. This disables statement "batching" and is required for version 11.50 and 11.70. If this is not done, the SUIT operation will fail with the error "maximum statement length exceeded" and abort.

SUIT is used to update the schemas for other databases as part of the installation program and is located in II installer\suit\sql.

suit [-t <dbtype>] -s <server> [-u <uid>] [-p <pwd>] -d <database> [-n <servername>]
             [-o <port>] [-c <schema>] [-x <xmlhome>] [-l <xslhome>]
             [-tokenProperties <tokenprops>]
             [-enc <encoding>] <product> <action> [-auto] [-v]

<dbtype> is one of:
    oracle: Oracle  (Default on Solaris and Linux)
    mssql2008:  Microsoft SQL Server 2008 (Default on Windows)
    db2:    IBM DB2 (Default on AIX)
    informix:   IBM Informix

<server>      is the database Host Name (or IP address)

<uid>         is the user ID or Oracle schema name
<pwd>         is the password (if unspecified the user is prompted for it)
<database>    is the instance (Oracle) or database (DB2, SQL Server)
<servername>  is the name of the informix server (Informix)
<port>        is the port number the database listens on. The default is used
              if unspecified.
<schema>      is the name of the DB2 schema to use. Default schema is
              used if unspecified. (DB2 only)

<xmlhome>     is the path to the directory where the product XML files reside
              (default is "./xml")

<xslhome>     is the path to the directory where the suit XSL files reside
              (default is "xslt")

<tokenprops>  is the path to a properties file containing token keys
              and string replacement values.

<encoding>    is the name of the output character encoding used for all text
              output (stdout and stderr). If unspecified, the JRE default is used.
              Supported encodings include UTF-8, UTF-16, UTF-16LE, UTF-16BE,
              ISO-8859-1 (ISO Latin-1), cp1252 (Windows Latin-1). Other encodings
              may be supported by your Java environment, see:
              http://www.iana.org/assignments/character-sets

<product>    is one of:
    NO PRODUCTS FOUND
    (access to different products may be possible via the "-x <xmlhome>" option)


<action>    is one of:
    install [-auto]
       Prints the SQL for creating a new schema for this
       product. Using -auto will automatically create the schema.

    upgrade [-auto]
       Prints the SQL for upgrading the schema for this product
       to the current version from the specified version.
       Using -auto will automatically upgrade the schema (not
       recommended for most systems).

    verify [-v]
       Examines the current schema against the master schema, and
       prints information about any missing objects.  Specifying
       -v will cause information about extra objects to be generated as well.

Licensed Materials - Property of IBM. Copyright IBM Corporation 2003, 2013. All Rights Reserved. US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corporation. IBM and the IBM logo are registered trademarks of International Business Machines Corporation in the United States, other countries, or both.