GPFS FAQs

General questions

Q1.1:
How do I order GPFS?
A1.1:
To order GPFS:
  • To order GPFS on POWER for AIX or Linux (5765-G66), find contact information for your country at http://www.ibm.com/planetwide/
  • To order GPFS for Linux or Windows on x86 Architecture (5765-XA3) (Note: GPFS on x86 Architecture is now available to order in the same IBM fulfillment system as GPFS on POWER), find contact information for your country at http://www.ibm.com/planetwide/
  • To order GPFS for Linux or Windows on x86 Architecture (5724-N94):
Note:
GPFS on x86 Architecture (5724-N94) is a renaming of the previously available GPFS for Linux and Windows Multiplatform offering.

Q1.2:
Where can I find ordering information for GPFS?
A1.2:
You can view ordering information for GPFS in:
  • The Cluster Software Ordering Guide at http://www.ibm.com/systems/clusters/software/reports/order_guide.html
  • The GPFS Announcement Letters Sales Manual at http://www.ibm.com/common/ssi/index.wss
    1. Select your language preference and click Continue.
    2. From the Type of content menu, choose Announcement letter and click the right arrow.
    3. Enter the corresponding product number in the product number field:
      • For General Parallel File System for POWER, enter 5765-G66
      • For General Parallel File System x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3
Q1.3:
How is GPFS priced?
A1.3:
A new pricing, licensing, and entitlement structure for Version 3.2 and follow-on releases of GPFS has been announced:

GPFS has two types of licenses, a Server license and a Client license (licenses are priced per processor core). For each node in a GPFS cluster, the customer determines the appropriate number of GPFS Server licenses or GPFS Client licenses that correspond to the way GPFS is used on that node (a node is one operating system instance, whether running on a single computer or in a virtual partition). For further information, see the related questions below.

Q1.4:
What is a GPFS Server?
A1.4:
You may use a GPFS Server to perform the following GPFS functions:
  1. Management functions such as cluster configuration manager, quorum node, manager node, and Network Shared Disk (NSD) server.
  2. Sharing data directly through any application, service, protocol, or method, such as Network File System (NFS), Common Internet File System (CIFS), File Transfer Protocol (FTP), or Hypertext Transfer Protocol (HTTP).
Q1.5:
What is a GPFS Client?
A1.5:
You may use a GPFS Client to exchange data between nodes that locally mount the same GPFS file system.
Note:
A GPFS Client may not be used for nodes to share GPFS data directly through any application, service, protocol, or method, such as NFS, CIFS, FTP, or HTTP. For this use, entitlement to a GPFS Server is required.
Q1.6:
I am an existing customer; how does the new pricing affect my licenses and entitlements?
A1.6:
Prior to renewal, the customer must identify the actual number of GPFS Client licenses and GPFS Server licenses required for their configuration, based on the usage defined in the questions What is a GPFS Client? and What is a GPFS Server? A customer with a total of 50 entitlements for GPFS will maintain 50 entitlements; those entitlements will be split between GPFS Servers and GPFS Clients depending upon the required configuration. Existing customers renewing entitlements must contact their IBM representatives to migrate their current licenses to the GPFS Server and GPFS Client model. For existing x86 Architecture Passport Advantage customers, your entitlements will have been migrated to the new GPFS Server and GPFS Client model prior to the renewal date; however, you will need to review and adjust those entitlements at the time of your renewal.
Q1.7:
What are some examples of the new pricing structure?

A1.7:

GPFS is orderable through multiple methods at IBM. One of these uses Processor Value Units (PVUs) and the other uses small, medium, and large processor tiers. Your IBM sales representative can help you determine which method is appropriate for your situation.

Pricing examples include:

GPFS for POWER (5765-G66) and GPFS on x86 Architecture (5765-XA3)

Licenses continue to be priced per processor core.

Common small commercial Power Systems™ cluster where virtualization is used:

  • You have a cluster that consists of four Power 570 systems. Each system has eight processor cores and is partitioned into two LPARs with four processor cores per LPAR, for a total of eight LPARs running GPFS. All of the nodes access the disk through a SAN.
  • Three of the LPARs are configured as quorum nodes. Since these nodes are running GPFS management tasks (that is, quorum), they require a GPFS Server license. Three nodes with four processor cores each means you will need 12 Server licenses.
  • Five of the LPARs are configured as non-quorum nodes. These nodes do not run GPFS management tasks, so five nodes with four processor cores each means you will need 20 Client licenses.
Table 2. x86 Architecture processor tier values
Processor Vendor | Processor Brand | Processor Model Number | Processor Tier
Intel | Xeon (Nehalem EX) | 7500 to 7599, 6500 to 6599 | >= 4 sockets per server = large; 2 sockets per server = medium
Intel | Xeon (Nehalem EP) | 3400 to 3599, 5500 to 5699 | medium
Intel | Xeon (pre-Nehalem) | 3000 to 3399, 5000 to 5499, 7000 to 7499 | small
AMD | Opteron | all existing | small
Any | Any single-core (i.e. Xeon Single-Core) | all existing | large

GPFS on x86 Architecture (5724-N94)

Licenses continue to be priced per 10 Processor Value Units (PVUs). For example, 1 AMD Opteron core requires 50 PVUs.

PROCESSOR VALUE UNIT

PVU is the unit of measure by which this program is licensed. PVU entitlements are based on processor families (vendors and brands). A Proof of Entitlement (PoE) must be obtained for the appropriate number of PVUs based on the level or tier of all processor cores activated and available for use by the Program on the server. Some programs allow licensing to less than the full capacity of the server's activated processor cores, sometimes referred to as sub-capacity licensing. For programs which offer sub-capacity licensing, if a server is partitioned utilizing eligible partitioning technologies, then a PoE must be obtained for the appropriate number of PVUs based on all activated processor cores available for use in each partition where the program runs or is managed by the program. Refer to the International Passport Advantage Agreement Attachment for Sub-Capacity Terms or the program's License Information to determine applicable sub-capacity terms. The PVU entitlements are specific to the program and may not be exchanged, interchanged, or aggregated with PVU entitlements of another program.

For a general overview of PVUs for processor families (vendors and brands), go to http://www.ibm.com/software/lotus/passportadvantage/pvu_licensing_for_customers.html

To calculate the exact PVU entitlements required for the program, go to https://www-112.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss

Common System x HPC setup with no virtualization:

  • You have four x3655 systems with eight cores each. In addition you have 32 x3455 systems each with four processor cores. Each physical machine is a GPFS node (no virtualization).
  • The four x3655 nodes are configured as NSD servers and quorum nodes. Therefore they are serving data and providing GPFS management services so they require a GPFS Server license. Four nodes each with eight AMD Opteron cores means you have a total of 32 cores. Each AMD Opteron core is worth 50 PVUs and each Server license is worth 10 PVUs so you will need 160 GPFS Server licenses. (32 AMD Opteron cores*50 PVUs)/10 PVUs per Server License = 160 GPFS Server licenses.
  • The 32 x3455 nodes are all configured as NSD clients. So you have 32 nodes each with four cores for a total of 128 cores. Each AMD Opteron core is worth 50 PVUs and each Client license is worth 10 PVUs so you will need 640 GPFS Client licenses. (128 AMD Opteron cores*50 PVUs)/10 PVUs per Client license = 640 client licenses.
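
The license-count arithmetic above can be sketched with simple shell arithmetic; the core counts, the 50 PVUs per AMD Opteron core, and the 10 PVUs per license are the values from this example and would differ for other processor tiers:

  # license count = (cores * PVUs per core) / PVUs per license
  cores=32; pvu_per_core=50; pvu_per_license=10
  echo $(( cores * pvu_per_core / pvu_per_license ))    # 160 GPFS Server licenses
  cores=128
  echo $(( cores * pvu_per_core / pvu_per_license ))    # 640 GPFS Client licenses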

For further information contact:

  • gpfs@us.ibm.com
  • In the United States, please call 1-888-SHOP-IBM
  • In all other locations, please contact your IBM Marketing Representative. For a directory of worldwide contacts, see www.ibm.com/planetwide/index.html
Q1.8:
How do I determine the number of licenses required in a virtualization environment?
A1.8:
The number of processors for which licenses are required is the smaller of the following:
  • The total number of activated processors in the machine
  • Or
    1. When GPFS nodes are in partitions with dedicated processors, then licenses are required for the number of processors dedicated to those partitions.
    2. When GPFS nodes are LPARs that are members of a shared processing pool, then licenses are required for the smaller of:
      • the number of processors assigned to the pool or
      • the sum of the virtual processors of each uncapped partition plus the processors in each capped partition

When the same processors are available to both GPFS Server nodes and GPFS Client nodes, GPFS Server licenses are required for those processors.

Any fractional part of a processor in the total calculation must be rounded up to a full processor.

Examples:

  1. One GPFS node is in a partition with .5 of a dedicated processor -> license(s) are required for 1 processor
  2. 10 GPFS nodes are in partitions on a machine with a total of 5 activated processors -> licenses are required for 5 processors
  3. LPAR A is a GPFS node with an entitled capacity of, say, 1.5 CPUs and is set to uncapped in a shared processor pool of 5 processors.

    LPAR A is used in a way that requires server licenses.

    LPAR B is a GPFS node that is on the same machine as LPAR A and is also part of the same shared processor pool as LPAR A.

    LPAR B is used in a way that does not require server licenses so client licenses are sufficient.

    LPAR B has an entitled capacity of 2 CPUs, but since it too is uncapped, it can use up to 5 processors out of the pool.

    For this configuration server licenses are required for 5 processors.
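
The rounding rule illustrated in example 1 (any fractional processor in the total rounds up to a full processor) can be checked with a small shell sketch; the 0.5 entitlement is simply the value from that example:

  # round a fractional processor entitlement up to whole processors
  awk 'BEGIN { e = 0.5; r = int(e); if (r < e) r = r + 1; print r }'    # prints 1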

Q1.9:
Where can I find the documentation for GPFS?
A1.9:
The GPFS documentation is available in both PDF and HTML format on the Cluster Information Center at publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html.
Q1.10:
What resources beyond the standard documentation can help me learn about and use GPFS?
A1.10:
For additional information regarding GPFS see:

Q1.11:
How can I ask a more specific question about GPFS?
A1.11:
Depending upon the nature of your question, you may ask it in one of several ways.

If your question does not fall into the above categories, you can send a note directly to the GPFS development team at gpfs@us.ibm.com. However, this mailing list is informally monitored as time permits and should not be used for priority messages to the GPFS team.

Q1.12:
Does GPFS participate in the IBM Academic Initiative Program?
A1.12:
GPFS no longer participates in the IBM Academic Initiative Program.

If you are currently using GPFS with an education license from the Academic Initiative, we will continue to support GPFS 3.2 on a best-can-do basis via email for the licenses you have. However, no additional or new licenses of GPFS will be available from the IBM Academic Initiative program. You should work with your IBM client representative on what educational discount may be available for GPFS. See www.ibm.com/planetwide/index.html

Software questions

Q2.1:
What levels of the AIX O/S are supported by GPFS?
A2.1:
Table 3. GPFS for AIX
GPFS | AIX V7.1 | AIX V6.1 | AIX V5.3 | AIX V5.2
GPFS V3.4 | X (GPFS 3.4.0-2, or later) | X | X |
GPFS V3.3 | X (GPFS 3.3.0-10, or later) | X | X |
GPFS V3.2 | X (GPFS 3.2.1-24, or later) | X | X | X
Notes:
  1. The following additional filesets are required by GPFS:
    • xlC.aix50.rte (C Set ++ Runtime for AIX 5.0), version 8.0.0.0 or later
    • xlC.rte (C Set ++ Runtime), version 8.0.0.0 or later
    These can be downloaded from Fix Central at http://www.ibm.com/eserver/support/fixes/fixcentral (a quick check for these filesets is sketched after these notes)
  2. Enhancements to the support of Network File System (NFS) V4 in GPFS are only available on AIX V5.3 systems with the minimum technology level of 5300-04 applied, AIX V6.1 or AIX V7.1.
  3. The version of OpenSSL shipped with some versions of AIX 7.1, AIX V6.1 and AIX V5.3 will not work with GPFS due to a change in how the library is built. To obtain the level of OpenSSL which will work with GPFS, see the question How do I get OpenSSL to work on AIX?
  4. Service is required for GPFS to work with some levels of AIX, please see the question What are the current advisories for GPFS on AIX?
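
As referenced in note 1, a quick way to confirm the required filesets and the AIX level on a node is sketched below; lslpp and oslevel are standard AIX commands, and the fileset names are the ones listed above:

  # verify the C Set ++ runtime filesets required by GPFS are installed
  lslpp -l xlC.rte xlC.aix50.rte
  # display the installed AIX level and technology level
  oslevel -s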
Q2.2:
What Linux distributions are supported by GPFS?
A2.2:
GPFS supports the following distributions:
Table 4. Linux distributions supported by GPFS

GPFS for Linux on x86 Architecture
GPFS | RHEL 6 | RHEL 5 | RHEL 4 | SLES 11 | SLES 10 | SLES 9
V3.4 | X (GPFS V3.4.0-2, or later) | X | X (GPFS V3.4.0-3, or later) | X | X |
V3.3 | X (GPFS V3.3.0-9, or later) | X | X | X | X | X
V3.2 | X (GPFS V3.2.1-24, or later) | X | X | X | X | X

GPFS for Linux on POWER
GPFS | RHEL 6 | RHEL 5 | RHEL 4 | SLES 11 | SLES 10 | SLES 9
V3.4 | X (GPFS V3.4.0-2, or later) | X | X (GPFS V3.4.0-3, or later) | X | X |
V3.3 | X (GPFS V3.3.0-9, or later) | X | X | X | X | X
V3.2 | X (GPFS V3.2.1-24, or later) | X | X | X | X | X

Please also see the questions:

Q2.3:
What are the latest kernel levels that GPFS has been tested with?
A2.3:
While GPFS runs with many different AIX fixes and Linux kernel levels, it is highly suggested that customers apply the latest fix levels and kernel service updates for their operating system. To download the latest GPFS service updates, go to the GPFS page on Fix Central.
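
To compare a node against the tested levels in the tables below, you can query the running kernel and the installed GPFS level. The gpfs.base package name is the usual GPFS base package on RPM-based Linux distributions, but verify the package names on your system:

  # show the running kernel level (compare with the tables below)
  uname -r
  # show the installed GPFS base package level
  rpm -q gpfs.base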

Please also see the questions:

Note:
GPFS for Linux on Itanium Servers is available only through a special Programming Request for Price Quotation (PRPQ). The install image is not generally available code. It must be requested by an IBM client representative through the RPQ system and approved before order fulfillment. If interested in obtaining this PRPQ, reference PRPQ # P91232 or Product ID 5799-GPS.
Table 5. GPFS for Linux RedHat support
RHEL Distribution | Latest Kernel Level Tested | Minimum GPFS Level
6.0 | 2.6.32-71 | GPFS V3.4.0-2 / V3.3.0-9 / V3.2.1-24
5.5 | 2.6.18-194 | GPFS V3.4.0-1 / V3.3.0-5 / V3.2.1-20
5.4 | 2.6.18-164 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
5.3 | 2.6.18-128 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
5.2 | 2.6.18-92.1.10 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
4.8 | 2.6.9-89 | GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1
4.7 | 2.6.9-78 | GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1
4.6 | 2.6.9-67.0.7 | GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1
Table 6. GPFS for Linux SLES support
SLES Distribution | Latest Kernel Level Tested | Minimum GPFS Level
SLES 11 SP1 | 2.6.32.12-0.7.1 | GPFS V3.4.0-1 / V3.3.0-7 / V3.2.1-24
SLES 11 | 2.6.27.19-5 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-13
SLES 10 SP3 | 2.6.16.60-0.59.1 | GPFS V3.4.0-1 / V3.3.0-5 / V3.2.1-18
SLES 10 SP2 | 2.6.16.60-0.27 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 10 SP1 | 2.6.16.53-0.8 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 10 | 2.6.16.21-0.25 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 9 SP4 | 2.6.5-7.312 | GPFS V3.3.0-1 / V3.2.1-1
SLES 9 SP3 | 2.6.5-7.286 | GPFS V3.3.0-1 / V3.2.1-1
Table 7. GPFS for Linux Itanium support
Distribution | Latest Kernel Level Tested | Minimum GPFS Level
RHEL 4.5 | 2.6.9-55.0.6 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1
SLES 10 SP1 | 2.6.16.53-0.8 |
SLES 9 SP3 | 2.6.5-7.286 |
Q2.4:
What are the current restrictions on GPFS Linux kernel support?
A2.4:
Current restrictions on GPFS Linux kernel support include:
  • GPFS does not support any Linux environments with SELinux enabled in enforcement mode.
  • GPFS has the following restrictions on RHEL support:
    • GPFS does not currently support the Transparent Huge Page (THP) feature available in RHEL 6.0. THP should be disabled at boot time by appending transparent_hugepage=never to the kernel boot options (see the sketch after this list).
    • GPFS does not currently support the following kernels:
      • RHEL hugemem
      • RHEL largesmp
      • RHEL uniprocessor (UP)
    • GPFS V3.4.0-2, 3.3.0-9, 3.2.1-24, or later supports RHEL 6.0.

      When installing the GPFS 3.3 base RPMs on RHEL 6, a symbolic link /usr/bin/ksh pointing to /bin/ksh is required to satisfy the /usr/bin/ksh dependency (see the sketch after this list).

    • GPFS V3.4.0-3 or later supports RHEL 4.
    • GPFS V3.3.0-5 or later supports RHEL 5.5.
    • GPFS V3.2.1-20 or later supports RHEL 5.5.
    • RHEL 5.0 and later on POWER requires GPFS V3.2.0.2 or later.
    • On RHEL 5.1, the automount option is slow; this issue should be addressed in the 2.6.18-53.1.4 kernel.
    • Red Hat kernel 2.6.18-164.11.1 or later requires a hotfix package for BZ567479. Please contact Red Hat support.
  • GPFS has the following restrictions on SLES support:
    • GPFS V3.3.0-7 or later supports SLES 11 SP1.
    • GPFS V3.3.0-5 or later supports SLES 10 SP3.
    • GPFS V3.3 supports SLES 9 SP 3 or later.
    • GPFS V3.2.1-24 or later, supports SLES 11 SP1.
    • GPFS V3.2.1-13 or later supports SLES 11.
    • GPFS V3.2.1-10 or later supports the SLES10 SP2 2.6.16.60-0.34-bigsmp i386 kernel.
    • GPFS V3.2.1-18 or later supports SLES 10 SP3.
    • GPFS does not support SLES 10 SP3 on POWER 4 machines.
    • The GPFS 3.2 GPL build requires imake. If imake was not installed on the SLES 10 or SLES 11 system, install xorg-x11-devel-*.rpm.
    • There is required service for support of SLES 10. Please see question What is the current service information for GPFS?
  • GPFS for Linux on POWER does not support mounting of a file system with a 16KB block size when running on either RHEL 5 or SLES 11.
  • Service is required for Linux kernels 2.6.30 or later, or for RHEL 5.4 (2.6.18-164.11.1.el5). Please see the question What are the current advisories for GPFS on Linux?
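
The two RHEL 6 items above (disabling Transparent Huge Pages at boot and creating the ksh symbolic link for the GPFS 3.3 RPM install) might look like the following sketch. The grub.conf path and kernel line are illustrative and assume the GRUB legacy boot loader shipped with RHEL 6; adjust them to your boot configuration:

  # disable Transparent Huge Pages at boot by appending the option to the kernel
  # line in /boot/grub/grub.conf (GRUB legacy), for example:
  #   kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=... transparent_hugepage=never
  grep transparent_hugepage /boot/grub/grub.conf   # confirm the option is present

  # satisfy the /usr/bin/ksh dependency of the GPFS 3.3 base RPMs on RHEL 6
  ln -s /bin/ksh /usr/bin/ksh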

Please also see the questions:

Q2.5:
Is GPFS on Linux supported in a virtualization environment?
A2.5:
You can install GPFS on a virtualization server or on a virtualized guest OS. When running GPFS on a guest OS the guest must be an OS version that is supported by GPFS and run as an NSD client. GPFS on Linux is supported in the following virtualization environments installed on the virtualization servers:
  1. GPFS V3.2.1-3, GPFS V3.3.0-7 and V3.4, or later, support the RHEL xen kernel for NSD clients only.
  2. GPFS V3.2.1-21, V3.3.0-7, and V3.4, or later, support the SLES xen kernel for NSD clients only.
  3. GPFS has been tested with VMware ESX 3.5 for NSD clients only and is supported on all Linux distros that are supported by both VMware and GPFS.
  4. GPFS has been tested with guests on RHEL 6.0 KVM hosts for NSD clients only and is supported on all Linux distros that are supported by both the RHEL 6.0 KVM host and GPFS.
Q2.6:
What levels of the Windows O/S are supported by GPFS?
A2.6:
Table 8. Windows O/S support

GPFS | Windows Server 2003 R2 x64 | Windows Server 2008 x64 (SP 2) | Windows Server 2008 R2
GPFS V3.4 | | X | X
GPFS V3.3 | | X | X
GPFS V3.2.1-5 | X | |

Also see the questions:

  1. What are the limitations of GPFS support for Windows?
  2. What are the current advisories for all platforms supported by GPFS?
  3. What are the current advisories for GPFS on Windows?
Q2.7:
What are the limitations of GPFS support for Windows?
A2.7:
Current limitations include:
  • GPFS for Windows does not support a file system feature called Directory Change Notification. This limitation can have adverse effects when GPFS files are exported using Windows file sharing. In detail, the issue relates to the SMB2 protocol used on Windows Vista and later operating systems. Because GPFS does not support Directory Change Notification, the SMB2 redirector cache on the client will not see cache invalidate operations if metadata is changed on the server or on another client. The SMB2 client will continue to see its cached version of the directory contents until the redirector cache expires. Hence, client systems may see an inconsistent view of the GPFS namespace. A workaround for this limitation is to disable the SMB2 protocol on the server. This ensures that SMB2 is not used even if the client is SMB2 capable. To disable SMB2, follow the instructions under the "MORE INFORMATION" section at http://support.microsoft.com/kb/974103
  • In GPFS homogeneous Windows clusters (GPFS V3.4 or later), the Windows nodes can perform most of the management and administrative operations. The exceptions include:
    • Certain GPFS commands for applying policies and administering quotas and ACLs.
    • Support for the native Windows Backup utility.

    Please refer to the GPFS Concepts, Planning and Installation Guide for a full list of limitations.

  • Tivoli Storage Manager (TSM) Backup Archive 6.2 client is only verified to work with GPFS V3.3. See the TSM Client Functional Compatibility Table at http://www-01.ibm.com/support/docview.wss?uid=swg21420322.
  • There is no migration path from Windows Server 2003 R2 (GPFS V3.2.1-5 or later) to Windows Server 2008 (GPFS V3.3).

    To move GPFS V3.2.1-5 or later Windows nodes to GPFS V3.3 (steps 1 and 6 are sketched after this list):

    1. Remove all the Windows nodes from your cluster.
    2. Uninstall GPFS 3.2.1.5 from your Windows nodes. This step is not necessary if you are reinstalling Windows Server 2008 from scratch (next step below) and not upgrading from Server 2003 R2.
    3. Install Windows Server 2008 and the required prerequisites on the nodes.
    4. Install GPFS 3.3 on the Windows Server 2008 nodes.
    5. Migrate your AIX and Linux nodes from GPFS 3.2.1-5 or later, to GPFS V3.3.
    6. Add the Windows nodes back to your cluster.
    Note:
    See the GPFS documentation at http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html for details on uninstalling, installing and migrating GPFS.
  • Windows only supports the DEFAULT and AUTHONLY ciphers.
  • A DMAPI-enabled file system may not be mounted on a Windows node.
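
Steps 1 and 6 of the Windows migration procedure above (removing the Windows nodes from the cluster and adding them back) would typically use the GPFS cluster membership commands, as in the sketch below. The node name is hypothetical; see the GPFS Administration and Programming Reference for the full option syntax:

  # step 1: remove a Windows node from the cluster (run from an AIX or Linux node)
  mmdelnode -N winnode1
  # step 6: add the node back after Windows Server 2008 and GPFS 3.3 are installed
  mmaddnode -N winnode1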
Q2.8:
What are the requirements for the use of OpenSSH on Windows nodes?
A2.8:
GPFS uses the SUA Community version of OpenSSH to support its administrative functions when the cluster includes Windows nodes and UNIX nodes. Microsoft does not provide SSH support in the SUA Utilities and SDK, and the remote shell service included with SUA has limitations that make it unsuitable for GPFS. Interop Systems Inc. hosts the SUA Community Web site (http://www.interopsystems.com/community/), which includes a forum and other helpful resources related to SUA and Windows/UNIX interoperability. Interop Systems also provides SUA Add-on Bundles that include OpenSSH (http://www.suacommunity.com/tool_warehouse.aspx) and many other packages; however, IBM recommends installing only the SUA Community packages that your environment requires. The steps below outline a procedure for installing OpenSSH. This information could change at any time. Refer to the Interop Community Forums (http://www.suacommunity.com/forum/default.aspx) for the current and complete installation instructions:
  1. Download the Bootstrap Installer (6.0/x64) from Package Install Instructions (http://www.suacommunity.com/pkg_install.htm) and install it on your Windows nodes.
  2. From an SUA shell, run pkg_update -L openssh.
  3. Log on as root and run regpwd from an SUA shell.

Complete the procedure as noted in the GPFS Concepts, Planning, and Installation Guide under the heading "Installing and configuring OpenSSH".
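Condensing steps 2 and 3 above, the SUA shell portion of the procedure looks like the following sketch (pkg_update and regpwd are the commands named in those steps; the Bootstrap Installer from step 1 must already be installed):

  # from an SUA shell on the Windows node
  pkg_update -L openssh
  # then, logged on as root, run:
  regpwd
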

Q2.9:
Can different GPFS maintenance levels coexist?
A2.9:
Certain levels of GPFS can coexist, that is, be active in the same cluster and simultaneously access the same file system. This allows for rolling upgrades of GPFS nodes within a cluster. Further it allows the mounting of GPFS file systems from other GPFS clusters that may be running a different maintenance level of GPFS. The current maintenance level coexistence rules are:
  • All GPFS V3.4 maintenance levels can coexist with each other and with GPFS V3.3 Maintenance Levels, unless otherwise stated in this FAQ.
  • All GPFS V3.3 maintenance levels can coexist with each other and with GPFS V3.2 Maintenance Levels, unless otherwise stated in this FAQ.
  • All GPFS V3.2 maintenance levels can coexist with each other, unless otherwise stated in this FAQ.

    See the Migration, coexistence and compatibility information in the GPFS V3.2 Concepts, Planning, and Installation Guide

    • The default file system version was incremented in GPFS 3.2.1-5. File systems created using GPFS V3.2.1-5 code without using the --version option of the mmcrfs command will not be mountable by earlier code (see the sketch after this list).
    • GPFS V3.2 maintenance levels 3.2.1.2 and 3.2.1.3 have coexistence issues with other maintenance levels.

      Customers using a mixed maintenance level cluster that have some nodes running 3.2.1.2 or 3.2.1.3 and other nodes running other maintenance levels should uninstall the gpfs.msg.en_US rpm/fileset from the 3.2.1.2 and 3.2.1.3 nodes. This should prevent incorrect message format strings from being exchanged between nodes at different maintenance levels.

    • Attention: Do not use the mmrepquota command if there are nodes in the cluster running a mixture of 3.2.0.3 and other maintenance levels. A fix is provided in APAR #IZ16367.
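
As noted above, a file system created on GPFS V3.2.1-5 or later remains mountable by earlier code only if an older format version is requested at creation time with the --version option of mmcrfs. A minimal sketch follows; the device name, disk descriptor file, and version string are hypothetical, and the exact syntax for your release is in the mmcrfs documentation:

  # create the file system at an older format level so that earlier GPFS code can mount it
  mmcrfs /gpfs/fs1 fs1 -F disk.desc --version 3.2.1.0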
Q2.10:
Are there any requirements for Clustered NFS (CNFS) support in GPFS?
A2.10:
GPFS supports Clustered NFS (CNFS) on SLES 11, SLES 10, SLES 9, RHEL 5, and RHEL 4. However, there are limitations:
Table 9. CNFS requirements

Distribution | lockd patch required | sm-notify required | rpc.statd required
SLES 10 SP1 and prior | X | X | not required
SLES 9 | X | X | not required
RHEL 5.1 and prior | X (not available for ppc64) | included in base distribution | X
RHEL 4 | X (not available for ppc64) | included in base distribution | X
See also What Linux kernel patches are provided for clustered file systems such as GPFS?
Q2.11:
Does GPFS support NFS V4?
A2.11:
Enhancements to the support of Network File System (NFS) V4 in GPFS are available on
  • AIX V5.3 systems with the minimum technology level of 5300-04 applied, AIX V6.1 or AIX V7.1.
  • GPFS V3.3 and 3.4 support NFS V4 on the following Linux distributions:
    • RHEL 5.5
    • RHEL 6.0
    • SLES 11 SP1

Restrictions include:

  • If a GPFS file system is exported over Linux/NFSv4 on RHEL 5.2, delegations must be disabled by running echo 0 > /proc/sys/fs/leases-enable on the RHEL 5.2 node. Other nodes can continue to grant delegations (for NFSv4) and/or oplocks (for CIFS). On all platforms, only read delegations are supported; there is no impact of this on applications.
  • GPFS cNFS does not support NFSv4.
  • Windows-based NFSv4 clients are not supported with Linux/NFSv4 servers because of their use of share modes.
  • If a file system is to be exported over NFSv4/Linux, then it must be configured to support POSIX ACLs (with the -k all or -k posix option; see the sketch at the end of this answer). This is because NFSv4/Linux servers will only handle ACLs properly if they are stored in GPFS as POSIX ACLs.
  • SLES clients do not support NFSv4 ACLs.
  • Concurrent AIX/NFSv4 servers, Samba servers and GPFS Windows nodes in the cluster are allowed. NFSv4 ACLs may be stored in GPFS filesystems via Samba exports, NFSv4/AIX servers, GPFS Windows nodes, ACL commands of Linux NFSv3 and ACL commands of GPFS. However, clients of Linux v4 servers will not be able to see these ACLs, just the permission from the mode.
Table 10. Readiness of NFSv4 support on different Linux distros with some patches

Feature | Red Hat 5.5 and 6.0 | SLES 11 SP1
Byte-range locking | Yes | Yes
Read Delegation | Yes | Yes
ACLs | Yes | Yes as a server. No as a client.

For more information on the support of NFS V4, please see the GPFS documentation updates file at http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html
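
The POSIX ACL requirement and the RHEL 5.2 delegation restriction noted above translate into commands along the lines of the sketch below. The file system name is hypothetical, and the -k option of mmchfs/mmcrfs is described in the GPFS Administration and Programming Reference:

  # allow POSIX ACLs on an existing file system before exporting it over NFSv4/Linux
  mmchfs fs1 -k posix
  # on a RHEL 5.2 NFSv4 server node, disable delegations (as described above)
  echo 0 > /proc/sys/fs/leases-enable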

Q2.12:
Are there any requirements for the use of the Persistent Reserve support in GPFS?
A2.12:
GPFS support for Persistent Reserve on AIX requires:
  • For GPFS V3.2 on AIX 5L™ V5.2, APAR IZ00673
  • For GPFS V3.2, V3.3, or V3.4 on AIX 5L V5.3, APARs IZ01534, IZ04114, and IZ60972
  • For GPFS V3.2, V3.3, or V3.4 on AIX V6.1, APAR IZ57224

Q2.13:
Are there any considerations when utilizing the Simple Network Management Protocol (SNMP)-based monitoring capability in GPFS?
A2.13:
Considerations for the use of the SNMP-based monitoring capability in GPFS V3.2, V3.3 and V3.4 include:
  • The SNMP collector node must be a Linux node in your GPFS cluster. GPFS utilizes Net-SNMP, which is not supported by AIX.
  • Support for ppc64 requires the use of Net-SNMP 5.4.1. Binaries for Net-SNMP 5.4.1 on ppc64 are not available. You will need to download the source and build the binary (a build sketch follows this list). Go to http://net-snmp.sourceforge.net/download.html
  • If the monitored cluster is relatively large, you need to increase the communication time-out between the SNMP master agent and the GPFS SNMP subagent. In this context, a cluster is considered to be large if the number of nodes is greater than 25, or the number of file systems is greater than 15, or the total number of disks in all file systems is greater than 50. For more information see Configuring Net-SNMP in the GPFS: Advanced Administration Guide.
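
For the ppc64 case above, building Net-SNMP 5.4.1 from source generally follows the standard autotools flow sketched below; the configure prefix is illustrative and not a GPFS-specific requirement:

  # download and unpack the Net-SNMP 5.4.1 source from the URL above, then:
  tar -xzf net-snmp-5.4.1.tar.gz
  cd net-snmp-5.4.1
  ./configure --prefix=/usr/local
  make
  make install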