VIOS 2.2.2.3 FP26_SP02

Written by Michael Felt

Package information

PACKAGE: Update Release 2.2.2.3
IOSLEVEL: 2.2.2.3

VIOS level                                    NIM Master level must be equal to or higher than
Service Pack 02 for Update Release 2.2.2.3    AIX 6100-08-03

General package notes

Review the list of fixes included in Update Release 2.2.2.3.

To take full advantage of all the function available in the VIOS on IBM Systems based on POWER6 or POWER7 technology, it is necessary to be at the latest system firmware level. If a system firmware update is necessary, it is recommended that the firmware be updated before you upgrade the VIOS to V2.2.2.3.
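
If you want to confirm the current system firmware level from the VIOS command line before updating, the lsfware command reports it (the firmware level shown here is illustrative only):

$ lsfware
system:AL730_095 (t) AL730_095 (p) AL730_095 (t)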

Microcode or system firmware downloads for Power Systems

The VIOS update Release V2.2.2.3 includes the IVM code, but it will not be enabled on HMC-managed systems. V2.2.2.3, like all VIOS Update Releases, can be applied to either HMC-managed or IVM-managed VIOS.

Update Release V2.2.2.3 updates your VIOS partition to ioslevel V2.2.2.3. To determine if Update Release 2.2.2.3 is already installed, run the following command from the VIOS command line:
$ ioslevel

If Update Release 2.2.2.3 is installed, the command output is 2.2.2.3
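
For scripted checks, a minimal comparison such as the following can gate an update run (a sketch; it assumes your shell permits command substitution):

$ if [ "$(ioslevel)" = "2.2.2.3" ]; then echo "Already at 2.2.2.3"; fi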

Fixes included in this release

The cumulative list of fixes since 2.2.2.0:
APAR Description
IV00549 bosboot may fail during installs if hd5 24MB or less
IV04091 fix updateios
IV27501 Performance improvements for SSP LU listing
IV28067 Data Storage Interrupt when running svm driver
IV32871 Cluster commands give permission denied after VIOS update
IV33966 ZBX: ZHELIX: LSVET -T HIST FAILS WITH IOT/ABORT TRAP - BAD GET
IV36108 CRON ERROR: COULD NOT GET CLUSTER_ID ATTR FOR VIOSCLUSTER0
IV36546 MBUF LEAK WHEN CAA CLUSTER IS CONFIGURED ON TOP OF A VLANDD
IV37776 UPDATEIOS MISSING QUOTE WHEN CREATING SHELL COMMAND.
IV37832 Possible I/O error after a path failure
IV37920 VEA - INVALID VLAN ID PACKETS FOR VLAN ID > 255
IV38682 PR SHARED TYPE COULD DEFAULT TO THE WRONG VALUE AT CFG TIME
IV38684 LPM MAY FAIL DUE TO SPECIAL CHARACTERS IN UDID STRING
IV38685 MKVDEV LOGS RAS ERROR WITH HYPHENATED LV BACKING DEVICE NAME
IV38686 CLIENT HDISK WILL NOT CONFIGURE WHEN MAPPED WITH MIRRORED=TRUE
IV38701 System crashed in mpio framework config code.
IV38710 Crash in disk driver when I/O hang is detected.
IV38724 ABEND IN GET_ALL_STATS
IV38732 CRASH IN TARGET_RAS_UNREGISTER
IV38733 ETHCHANPROC HOGS CPU DUE TO PENDING SIGNAL
IV38758 VIOSECURE FAILS WITH PREREQ FAIL FOR PREREQCDE OF *LS_XHOST RULE
IV38765 VIOSBR BACKUP FAILURE FOR ADAPTER WITH DUPLICATE LOC CODE
IV38767 LSPV DEBUG MESSAGES NOT LOGGED
IV38771 "LSLV -FREE" GIVES RC=1 IF ANY VG IS VARIED OFF.
IV38775 CAA CLCOMD ENCOUNTERS FALSE CONNECTION CONFLICT ERROR
IV38777 Potential socket corruption on cluster identity messages
IV38778 Disable multicast loopback for mping
IV38781 Nodes crashed with DMS with security enabled
IV38785 clcomd core dump
IV38809 crash at ha_critical_halt during simultaneous clstartstop cmds
IV39150 Client unable to access virtual optical device
IV39514 USAGE OF "AGGREGATION" FLAG IN LACPDU (8023AD ETHERCHANNEL)
IV39853 CLNTOS WRONG IN "LSMAP -NPIV" WHEN CLNTNAME OVER 14 CHARS
IV39854 LSMAP WITH -FMT DIDN'T DISPLAY FULL NAME OF CLIENT PARTITION
IV39860 CAA must not allow disks using old scdisk device driver
IV39861 Crash while down/up, detach/attach reboot in IPV6
IV39862 VIRT OPTICAL DEVICE RESTORE FAILS IF BACKING DEVICE NOT LOADED
IV39863 Crash when using ahafs
IV39900 CLIENT_FAILURE WITH NEWLY CREATED VSCSI CLIENT ADAPTER
IV39901 REDUCEVG HANGS FOR HDISK(SCDISK) WITH DUMP DEVICE
IV39918 coredump on node monitor function
IV39920 VIOS can crash if the vasiop kdb command is run
IV40181 MKBDSP WITH DUPLICATE LU_NAME HAS INCORRECT RETURN CODE
IV40271 Detect and notify network loop condition
IV40278 TEMP Overlapped Command errors encountered on XIV storage.
IV40356 VIOS CRASH WHEN DISABLING SEA ACCOUNTING
IV40441 LSMAP MSG "DETERMINING DEVICE TYPE FAILED." WITH VLOG AS BD
IV40497 vsi configuration not done when vios reboots
IV41171 fix issues for supporting 3rd party disk cluster creation
IV41248 VIOSBR "ERROR: ERROR GETTING CUDV FOR VMLIBRARY_LV"
IV41249 ENTRIES IN "TSD.DAT" AND "PRIVCMDS" DO NOT MATCH
IV41253 clcomd consumers intermittently fail with slow host resolution
IV41472 Possible crash in kexitx after close disk
IV41583 LSPV -SIZE DISK ERRORS ON READ-ONLY LUN
IV41627 Display ndd_2_flags in entstat output.
IV41952 CRASH IN SCSIDISK_BUILD_ERROR DUE TO UNINITIALIZED CMD STRUCT
IV41954 Crash in sfwdProcessSciolEvent on recursive lock
IV41983 Deleting SSP VTDs on stopped node doesn't work properly
IV42178 System crashed with AST.
IV42181 Small possibility of a segfault in cfgscsidisk.
IV42408 CAA KE LOCK CONTENTION CAUSES NETWORK PERFORMANCE DEGRADATION
IV42410 chrepos failure with create_cvg error
IV42611 ALLOWED NPIV PORTS SET TO 2 AFTER GPN_FT
IV42681 CLEAN UP TEMPORARY FILES FROM FCSTAT -CLIENT
IV42976 List suspended adapters displays an incomplete list.
IV42978 clcmd incorrectly exits with return code of 0 on failure
IV43021 LSPV -SIZE DOES NOT WORK FOR SCSI RAID DISKS
IV43022 Cannot get Paging Space from HMC
IV43129 Inability to suspend client after cluster recreate.
IV43130 vioscmd may core dump when removing a node from an SSP cluster
IV43529 lstcpip should not call entstat if interface is not configured
IV43735 error migrating LPAR with NPIV
IV43736 Rare crash during non-block aligned access
IV43738 Rare crash in POF code.
IZ06908 Linux installation using FB optical media fails
IZ33885 System hangs at 517 after installing SDDPCM

Known Capabilities and Limitations

The following requirements and limitations apply to Shared Storage Pool (SSP) features and any associated virtual storage enhancements.

Requirements for Shared Storage Pool

  • Platforms: POWER6 and POWER7 only (includes Blades), IBM PureFlex Systems (Power Compute Nodes only)
  • System requirements per SSP node:
    • Minimum CPU: 1 CPU of guaranteed entitlement
    • Minimum memory: 4 GB
  • Storage requirements per SSP cluster (minimum):
    • 1 Fibre Channel-attached disk for the repository, 1 GB
    • At least 1 Fibre Channel-attached disk for data, 10 GB
  • All storage devices (repository and pool) should be allocated on hardware-RAID storage for redundancy.

Limitations for Shared Storage Pool

Software Installation

  • All VIOS nodes must be at version 2.2.1.3 or later.
  • When installing updates for VIOS version 2.2.2.3 participating in a Shared Storage Pool, the Shared Storage Pool Services must be stopped on the node being upgraded.
SSP Configuration

Feature                                        Min      Max
Number of VIOS Nodes in Cluster                1        16
Number of Physical Disks in Pool               1        1024
Number of Virtual Disk (LU) Mappings in Pool   1        8192
Number of Client LPARs per VIOS Node           1        200
Capacity of Physical Disks in Pool             10 GB    16 TB
Storage Capacity of Storage Pool               10 GB    512 TB
Capacity of a Virtual Disk (LU) in Pool        1 GB     4 TB
Number of Repository Disks                     1        1

  • Maximum number of physical volumes that can be added to or replaced from a pool at one time: 64
  • The Shared Storage Pool cluster name must be less than 63 characters long.
  • The Shared Storage Pool pool name must be less than 127 characters long.
  • The maximum supported LU size is 4 TB. However, it is recommended to limit the size of individual LUs to 16 GB for optimal performance in cases where all of the following conditions are met:
    • The server generates a random access pattern for the I/O device.
    • There are more than 8 processes concurrently performing I/O.
    • The performance of the application is dependent on the I/O subsystem throughput.

Network Configuration
  • Uninterrupted network connectivity is required for operation; that is, the network interface used for the Shared Storage Pool configuration must be on a highly reliable network that is not congested.
  • A Shared Storage Pool configuration can use IPv4 or IPv6, but not a combination of both.
  • Shared Storage Pools use Cluster Aware AIX (CAA) as their clustering technology. CAA requires a multicast-capable network environment to operate.
  • A Shared Storage Pool configuration should configure the TCP/IP resolver routine to resolve host names locally first, and then use DNS (see the example after this list). For step-by-step instructions, refer to the TCP/IP name resolution documentation in the AIX Information Center.
  • The forward and reverse lookup should resolve to the IP address/hostname that is used for Shared Storage Pool configuration.
  • It is recommended that the VIOSs that are part of the Shared Storage Pool configuration keep their clocks synchronized.
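
On AIX, local-first lookup ordering is typically set in /etc/netsvc.conf (a minimal sketch; confirm the exact syntax against the AIX Information Center):

hosts = local, bind

With this ordering, the resolver consults /etc/hosts first and falls back to DNS.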

Storage Configuration
  • Physical Disks in the SAN Storage subsystem assigned to the Shared Storage Pool cannot be resized.
  • Virtual SCSI devices provisioned from the Shared Storage Pool may drive higher CPU utilization than the classic Virtual SCSI devices.
  • Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
  • High availability SAN solutions should be utilized to mitigate outages.
  • SANCOM is not supported in a Shared Storage Pool environment.

Shared Storage Pool capabilities and limitations
  • Virtual SCSI disk is the only peripheral device type supported by SSP at this time.
  • VIOSs configured for SSP require that Shared Ethernet Adapter(s) (SEA) be set up for Threaded mode (the default mode). SEA in Interrupt mode is not supported within SSP.
  • VIOSs configured for SSP can be used as a Paging Space Partition (PSP), but the storage for the PSP paging spaces must come from logical devices not within a Shared Storage Pool. Using a VIOS SSP logical unit (LU) as an Active Memory Sharing (AMS) paging space or as the suspend/resume file is not supported. Also, the Suspend/Resume and Remote Restart features for client LPARs backed by VIOS SSP LUs are not supported.
  • When creating Virtual SCSI Adapters for VIOS LPARs, the option "Any client partition can connect" is not supported.

Installation information

Pre-installation information and instructions

Ensure that your rootvg contains at least 30 GB before you attempt to upgrade to VIOS Service Release 2.2.2.3.

Example: Run the lsvg rootvg command, and then confirm that there is enough free space, as shown in the sample below.
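
A sample check (the output layout is abbreviated and the PP values are illustrative, not from a real system):

$ lsvg rootvg
VOLUME GROUP:  rootvg           VG STATE:   active
PP SIZE:       64 megabyte(s)   TOTAL PPs:  558 (35712 megabytes)
                                FREE PPs:   495 (31680 megabytes)

Multiply FREE PPs by PP SIZE to confirm the available space.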

If you are updating a VIOS at a version lower than 2.1, you must first migrate your VIOS to version 2.1.0 using the Migration DVD. After the VIOS is at version 2.1.0, apply Update Release VIOS 2.2.2.1, and then apply the VIOS 2.2.2.3 Service Pack to bring the VIOS to the latest level.

Note: When upgrading to VIOS 2.2.2.3 from levels below 2.2.2.1, you can put the 2.2.2.1 and 2.2.2.3 updates in the same location.

After the VIOS migration from 1.X to 2.X is complete, you must set the Processor Folding attribute, described under Migration DVD at the following location:

Virtual I/O Server support for Power Systems

While the above process is the most straightforward for users, note that with this Update Release version 2.2.2.3, a single-boot alternative to this multiple-step process is available to NIM users. NIM users can update by creating a single, merged lpp_source that combines the contents of the Migration DVD with the contents of this Update Release 2.2.2.3.

A single, merged lpp_source is not supported for VIOS that uses SDDPCM. However, if you use SDDPCM, you can still enable a single boot update by using the alternate method described at the following location:

SDD and SDDPCM migration procedures when migrating VIOS from version 1.x to version 2.x


Before installing the Service Release 2.2.2.3

The update could fail if there is a loaded media repository.

Checking for a loaded media repository

To check for a loaded media repository, and then unload it, follow these steps.

  1. To check for loaded images, run the following command:

    $ lsvopt
    The Media column lists any loaded media.

  2. To unload media images, run the following command on every Virtual Target Device that has loaded media (a loop example follows these steps).

    $ unloadopt -vtd <file-backed_virtual_optical_device>

  3. To verify that all media are unloaded, run the following command again.

    $ lsvopt
    The command output should show No Media for all VTDs.
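
If several VTDs have media loaded, a simple loop unloads them one at a time (a sketch; the vtopt device names are illustrative):

$ for vtd in vtopt0 vtopt1 vtopt2
> do
> unloadopt -vtd $vtd
> done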


Migrate Shared Storage Pool Configuration

The Virtual I/O Server (VIOS) Version 2.2.2.1 or later supports rolling updates for clusters. The VIOS can be updated to VIOS 2.2.2.3 using rolling updates.

If your current VIOS is configured with Shared Storage Pools at version 2.2.1.1 or 2.2.1.3, the following information applies:

A cluster that was created and configured on the earlier VIOS Version 2.2.1.1 or 2.2.1.3 must be migrated to version 2.2.1.4 or 2.2.1.5 before rolling updates can be used. This allows the user to keep their shared storage pool devices. When the VIOS version is greater than or equal to 2.2.1.4 and less than 2.2.2.1, download the 2.2.2.1 and 2.2.2.3 update images into the same directory, and then update the VIOS to 2.2.2.3 using rolling updates, as shown in the example below.
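
For example, a single directory holding both update images can be used as the install device (a sketch; the directory name is hypothetical):

$ mkdir /home/padmin/update_2223
(transfer both the 2.2.2.1 and the 2.2.2.3 update images into this directory)
$ updateios -accept -install -dev /home/padmin/update_2223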

If your current VIOS is configured with Shared Storage Pools at version 2.2.1.4 or later on the 2.2.1 level, or at version 2.2.2.1 or later on the 2.2.2 level, the following information applies:

The rolling updates enhancement allows the user to apply Update Release 2.2.2.3 to the VIOS logical partitions in the cluster individually without causing an outage in the entire cluster. The updated logical partitions cannot use the new capabilities until all logical partitions in the cluster are updated and the cluster is upgraded to use the new capabilities.

To upgrade the VIOS logical partitions to use the new capabilities, ensure that the following conditions are met:

  • All VIOS logical partitions must have VIOS Update Release version 2.2.1.4 or later installed. After the update, you can verify that the logical partitions have the new level of software installed by typing the cluster -status -verbose command from the VIOS command line (see the sample after this list). In the Node Upgrade Status field, if the status of the VIOS logical partition is displayed as UP_LEVEL, the software level in the logical partition is higher than the software level in the cluster. If the status is displayed as ON_LEVEL, the software level in the logical partition and in the cluster is the same.
  • All VIOS logical partitions must be running. If any VIOS logical partition in the cluster is not running, the cluster cannot be upgraded to use the new capabilities.

The VIOS SSP software monitors node status and will automatically upgrade the cluster to make use of the new capabilities when all the nodes in the cluster have been updated to support those capabilities.
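
To watch the progress, the status command named above can be run on any node (the cluster name and the output fields shown are illustrative):

$ cluster -status -clustername mycluster -verbose
...
Node Name:            vios1
Node Upgrade Status:  2.2.2.3 ON_LEVEL
...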

Installing the Update Release

There is now a method to verify the VIOS update files before installation. This process requires that the 'padmin' user have access to openssl, which can be accomplished by creating a link.

To verify the VIOS update files, follow these steps:

$ oem_setup_env
Create a link to openssl:
# ln -s /usr/bin/openssl /usr/ios/utils/openssl
Verify that the link to openssl was created:
# ls -al /usr/ios/utils
# exit

Use one of the following methods to install the latest VIOS Service Release. As with all maintenance, you should create a VIOS backup before making changes.

If you are running a Shared Storage Pool configuration, you must follow the steps in Migrate Shared Storage Pool Configuration.

Note: While running 'updateios' in the following steps, you may see accessauth messages; these messages can safely be ignored.

Applying updates from a local hard disk

To apply the updates from a directory on your local hard disk, follow these steps.

The current level of the VIOS must be 2.2.2.1.

  1. Log in to the VIOS as the user padmin.
  2. If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See Checking for a loaded media repository, above.
  3. If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.

    $ clstartstop -stop -n <cluster_name> -m <hostname>

  4. Create a directory on the Virtual I/O Server.

    $ mkdir <directory_name>

  5. Using ftp, transfer the update file(s) to the directory you created.
  6. Commit previous updates by running the updateios command

    $ updateios -commit

  7. Verify the update files that were copied. This step can only be performed if the link to openssl was created.

    $ cp <directory_path>/ck_sum.bff /home/padmin
    $ chmod 755 /home/padmin/ck_sum.bff
    $ ck_sum.bff <directory_path>
    If there are missing updates or incomplete downloads, an error message is displayed.

  8. Apply the update by running the updateios command

    $ updateios -accept -install -dev <directory_name>

  9. Run the following command to set authorization for padmin.

    $ swrole - PAdmin

  10. To load all changes, reboot the VIOS as user padmin.

    $ shutdown -restart

  11. If cluster services were stopped in step 3, restart cluster services.

    $ clstartstop -start -n <cluster_name> -m <hostname>

  12. Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.2.3.

    $ ioslevel

  13. If you need to restore your cluster’s database, follow the steps in Migrate Shared Storage Pool Configuration.

Applying updates from a remotely mounted file system

If the remote file system is to be mounted read-only, follow these steps.

The current level of the VIOS must be 2.2.2.1.

  1. Log in to the VIOS as the user padmin.
  2. If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See Checking for a loaded media repository, above.
  3. If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.

    $ clstartstop -stop -n <cluster_name> -m <hostname>

  4. Mount the remote directory onto the Virtual I/O Server.

    $ mount remote_machine_name:directory /mnt

  5. Commit previous updates by running the updateios command.

    $ updateios -commit

  6. Verify the update files that were copied. This step can only be performed if the link to openssl was created.

    $ cp /mnt/ck_sum.bff /home/padmin
    $ chmod 755 /home/padmin/ck_sum.bff
    $ ck_sum.bff /mnt
    If there are missing updates or incomplete downloads, an error message is displayed.

  7. Apply the update by running the updateios command.

    $ updateios -accept -install -dev /mnt

  8. Run the following command to set authorization for padmin.

    $ swrole - PAdmin

  9. To load all changes, reboot the VIOS as user padmin.

    $ shutdown -restart

  10. If cluster services were stopped in step 3, restart cluster services.

    $ clstartstop -start -n <cluster_name> -m <hostname>

  11. Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.2.3.

    $ ioslevel

Applying updates from the CD/DVD drive

This Update Release can be burned onto a CD by using the ISO image file(s). After the CD has been created, follow these steps.

The current level of the VIOS must be 2.2.2.1.

  1. Log in to the VIOS as the user padmin.
  2. If you use one or more File Backed Optical Media Repositories, you need to unload media images before you apply the Update Release. See Checking for a loaded media repository, above.
  3. If you use Shared Storage Pools, then Shared Storage Pool Services must be stopped.

    $ clstartstop -stop -n <cluster_name> -m <hostname>

  4. Place the CD-ROM into the drive assigned to VIOS.
  5. Commit previous updates, by running the updateios command.

    $ updateios -commit

  6. Apply the update by running the following update command.

    $ updateios -accept -install -dev /dev/cdX
    where X is the device number 0-N assigned to the VIOS (see the lsdev example after these steps).

  7. Run the following command to set authorization for padmin.

    $ swrole - PAdmin

  8. To load all changes, reboot the VIOS as user padmin.

    $ shutdown -restart

  9. If cluster services were stopped in step 3, restart cluster services.

    $ clstartstop -start -n <cluster_name> -m <hostname>

  10. Verify that the update was successful by checking the results of the updateios command and by running the ioslevel command, which should indicate that the ioslevel is now 2.2.2.3.

    $ ioslevel
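
If you are unsure which optical device number is assigned to the VIOS, lsdev can list the optical devices (the sample output is illustrative):

$ lsdev -type optical
name    status      description
cd0     Available   SATA DVD-RAM Drive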

Performing the necessary tasks after installation

Checking for an incomplete installation caused by a loaded media repository

After installing an Update Release, you can use this method to determine whether you have encountered the problem of a loaded media repository.

Check the Media Repository by running this command:
$ lsrep

If the command reports: "Unable to retrieve repository date due to incomplete repository structure," then you have likely encountered this problem during the installation. The media images have not been lost and are still present in the file system of the virtual media library.

Running the lsvopt command should show the media images.
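
For example (the VTD and media names are illustrative):

$ lsvopt
VTD          Media                 Size(mb)
vtopt0       media_image.iso       3074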

Recovering from an incomplete installation caused by a loaded media repository

To recover from this type of installation failure, unload any media repository images, and then reinstall the ios.cli.rte package. Follow these steps:

  1. Unload any media images

    $ unloadopt -vtd <file-backed_virtual_optical_device>

  2. Reinstall the ios.cli.rte fileset by running the following commands.

    To escape the restricted shell:
    $ oem_setup_env
    To install the failed fileset:
    # installp -Or -agX ios.cli.rte -d <device/directory>
    To return to the restricted shell:
    # exit

  3. Restart the VIOS.

    $ shutdown -restart

  4. Verify that the Media Repository is operational by running this command:

    $ lsrep
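
A healthy repository listing looks something like the following (the sizes and media names are illustrative):

$ lsrep
Size(mb) Free(mb) Parent Pool         Parent Size      Parent Free
   15349    10023 rootvg                   102272            56576

Name                        File Size Optical         Access
media_image.iso                  3074 vtopt0          rw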

Additional information

NIM installation information

Using NIM to back up and install the VIOS is supported as follows.

  • Always create the SPOT resource directly from the VIOS mksysb image (see the sketch below). Do NOT update the SPOT from an LPP_SOURCE.
  • Only the updateios command should be used to update the VIOS. For further assistance, refer to the NIM documentation.

To use NIM, ensure that the NIM Master is at the appropriate level to support the VIOS image. Refer to the Package information section above.
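
For reference, defining a SPOT directly from a VIOS mksysb resource on the NIM master might look like the following (a sketch; the resource names and location path are hypothetical):

# nim -o define -t spot -a server=master \
    -a source=vios_22223_mksysb \
    -a location=/export/spot vios_22223_spot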

Installing the latest version of Tivoli TSM

This release of the VIOS contains several enhancements in the area of POWER virtualization. The following list describes the features by product area.

Note: Version 6.1.0, the previous version of Tivoli TSM, is still shipped and installed from the VIOS installation DVD.

Tivoli TSM version 6.2.2

The Tivoli filesets are now being shipped on the VIOS Expansion Pack, with the required GSKit8 libraries.

The following are sample installation instructions for the new Tivoli TSM filesets:

  1. Insert the VIOS Expansion DVD into the DVD drive that is assigned to the VIOS partition.
  2. List the contents of the VIOS Expansion DVD.

    $ updateios -list -dev /dev/cd0
    Fileset Name
    GSKit8.gskcrypt32.ppc.rte 8.0.14.7
    GSKit8.gskcrypt64.ppc.rte 8.0.14.7
    GSKit8.gskssl32.ppc.rte 8.0.14.7
    GSKit8.gskssl64.ppc.rte 8.0.14.7
    ..
    tivoli.tsm.client.api.32bit 6.2.2.0
    tivoli.tsm.client.api.64bit 6.2.2.0
    ..

  3. Install the Tivoli TSM filesets.

    $ updateios -fs tivoli.tsm.client.api.32bit -dev /dev/cd0

    Note: Any prerequisite filesets will be pulled in from the Expansion DVD, including, for TSM, the GSKit8.gskcrypt filesets.

  4. If needed, install additional TSM filesets.

    $ updateios -fs tivoli.tsm.client.ba.32bit -dev /dev/cd0

  5. Verify that TSM installed by listing the installed software.

    $ lssw
    Sample output:
    ..
    tivoli.tsm.client.api.32bit 6.2.2.0 CF TSM Client - Application Programming
    Interfac