- How do I order GPFS?
- To order GPFS:
- To order GPFS on POWER for AIX or Linux (5765-G66), find contact information for your country at http://www.ibm.com/planetwide/
- To order GPFS for Linux or Windows on x86 Architecture (5765-XA3), find contact information for your country at http://www.ibm.com/planetwide/ (Note: GPFS on x86 Architecture is now available to order through the same IBM fulfillment system as GPFS on POWER)
- To order GPFS for Linux or Windows on x86 Architecture (5724-N94):
GPFS on x86 Architecture (5724-N94) is a renaming of the previously available GPFS for Linux and Windows Multiplatform offering.
- Where can I find ordering information for GPFS?
- You can view ordering information for GPFS in:
- The Cluster Software Ordering Guide at http://www.ibm.com/systems/clusters/software/reports/order_guide.html
- The GPFS Announcement Letters Sales Manual at http://www.ibm.com/common/ssi/index.wss
- Select your language preference and click Continue.
- From the Type of content menu, choose Announcement letter and click the right arrow.
- Enter the corresponding product number in the product number field:
- For General Parallel File System for POWER, enter 5765-G66
- For General Parallel File System x86 Architecture, enter the appropriate order number; either 5724-N94 or 5765-XA3
- How is GPFS priced?
- A new pricing, licensing, and entitlement structure for Version 3.2 and follow-on releases of GPFS has been announced:
GPFS has two types of licenses, a Server license and a Client license (licenses are priced per processor core). For each node in a GPFS cluster, the customer determines the appropriate number of GPFS Server licenses or GPFS Client licenses that correspond to the way GPFS is used on that node (a node is defined as one operating system instance on a single computer or running in a virtual partition). For further information, see the related questions below.
- What is a GPFS Server?
- You may use a GPFS Server to perform the following GPFS functions:
- Management functions such as cluster configuration manager, quorum node, manager node, and Network Shared Disk (NSD) server.
- Sharing data directly through any application, service protocol, or method, such as Network File System (NFS), Common Internet File System (CIFS), File Transfer Protocol (FTP), or Hypertext Transfer Protocol (HTTP).
- What is a GPFS Client?
- You may use a GPFS Client to exchange data between nodes that locally mount the same GPFS file system.
A GPFS Client may not be used for nodes to share GPFS data directly through any application, service, protocol, or method, such as NFS, CIFS, FTP, or HTTP. For this use, entitlement to a GPFS Server is required.
- I am an existing customer, how does the new pricing affect my licenses and entitlements?
- Prior to renewal, the customer must identify the actual number of GPFS Client licenses and GPFS Server licenses required for their configuration, based on the usage defined in the questions What is a GPFS Client? and What is a GPFS Server? A customer with a total of 50 entitlements for GPFS will maintain 50 entitlements; those entitlements will be split between GPFS Servers and GPFS Clients depending upon the required configuration. Existing customers renewing entitlements must contact their IBM representatives to migrate their current licenses to the GPFS Server and GPFS Client model. For existing x86 Architecture Passport Advantage customers, your entitlements will have been migrated to the new GPFS Server and GPFS Client model prior to the renewal date; however, you will need to review and adjust those entitlements at the time of your renewal.
- What are some examples of the new pricing structure?
GPFS is orderable through multiple methods at IBM: one method is priced using Processor Value Units (PVUs), and the other uses small, medium, and large processor tiers. Your IBM sales representative can help you determine which method is appropriate for your situation.
Pricing examples include:
GPFS for POWER (5765-G66) and GPFS on x86 Architecture (5765-XA3)
Licenses continue to be priced per processor core.
Common small commercial Power Systems™ cluster where virtualization is used:
- You have a cluster that consists of four Power 570 systems. Each system has eight processor cores and is partitioned into two LPARs with four processor cores per LPAR, for a total of eight LPARs running GPFS. All of the nodes access the disk through a SAN.
- Three of the LPARs are configured as quorum nodes. Since these nodes run GPFS management tasks (i.e., quorum), they require a GPFS Server license. Three nodes with four cores each means you will need 12 Server licenses.
- Five of the LPARs are configured as non-quorum nodes. These nodes do not run GPFS management tasks, so five nodes with four cores each means you will need 20 Client licenses.
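The per-core license arithmetic above can be sketched in a few lines of Python. This is only an illustration of the counting rule; the node roles and core counts are taken from the example, and the dictionary structure is hypothetical, not any GPFS tool or API:

```python
# Per-core GPFS license counting for the example Power 570 cluster.
# Each LPAR (node) needs Server or Client licenses for all of its cores,
# depending on whether it performs GPFS management or data-serving functions.
nodes = (
    [{"role": "server", "cores": 4} for _ in range(3)]   # 3 quorum LPARs
    + [{"role": "client", "cores": 4} for _ in range(5)] # 5 non-quorum LPARs
)

server_licenses = sum(n["cores"] for n in nodes if n["role"] == "server")
client_licenses = sum(n["cores"] for n in nodes if n["role"] == "client")

print(server_licenses)  # 12 Server licenses
print(client_licenses)  # 20 Client licenses
```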
Table 2. x86 Architecture processor tier values
| Processor Vendor | Processor Brand | Processor Model Number | Processor Tier |
|---|---|---|---|
| Intel | Xeon (Nehalem EX) | 7500 to 7599; 6500 to 6599 | >=4 sockets per server = large; 2 sockets per server = medium |
| Intel | Xeon (Nehalem EP) | 3400 to 3599; 5500 to 5699; 3000 to 3399; 5000 to 5499; 7000 to 7499 | |
| Any | Any single-core (e.g., Xeon single-core) | all existing | large |
GPFS on x86 Architecture (5724-N94)
Licenses continue to be priced per 10 Processor Value Units (PVUs). For example, 1 AMD Opteron core requires 50 PVUs.
PROCESSOR VALUE UNIT
PVU is the unit of measure by which this program is licensed. PVU entitlements are based on processor families (vendors and brands). A Proof of Entitlement (PoE) must be obtained for the appropriate number of PVUs based on the level or tier of all processor cores activated and available for use by the program on the server. Some programs allow licensing to less than the full capacity of the server's activated processor cores, sometimes referred to as sub-capacity licensing. For programs which offer sub-capacity licensing, if a server is partitioned utilizing eligible partitioning technologies, then a PoE must be obtained for the appropriate number of PVUs based on all activated processor cores available for use in each partition where the program runs or is managed by the program. Refer to the International Passport Advantage Agreement Attachment for Sub-Capacity Terms or the program's License Information to determine applicable sub-capacity terms. The PVU entitlements are specific to the program and may not be exchanged, interchanged, or aggregated with PVU entitlements of another program.
For a general overview of PVUs for processor families (vendors and brands), go to http://www.ibm.com/software/lotus/passportadvantage/pvu_licensing_for_customers.html
To calculate the exact PVU entitlements required for the program, go to https://www-112.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss
Common System x HPC setup with no virtualization:
- You have four x3655 systems with eight cores each. In addition you have 32 x3455 systems each with four processor cores. Each physical machine is a GPFS node (no virtualization).
- The four x3655 nodes are configured as NSD servers and quorum nodes. Therefore they are serving data and providing GPFS management services so they require a GPFS Server license. Four nodes each with eight AMD Opteron cores means you have a total of 32 cores. Each AMD Opteron core is worth 50 PVUs and each Server license is worth 10 PVUs so you will need 160 GPFS Server licenses. (32 AMD Opteron cores*50 PVUs)/10 PVUs per Server License = 160 GPFS Server licenses.
- The 32 x3455 nodes are all configured as NSD clients. So you have 32 nodes each with four cores for a total of 128 cores. Each AMD Opteron core is worth 50 PVUs and each Client license is worth 10 PVUs so you will need 640 GPFS Client licenses. (128 AMD Opteron cores*50 PVUs)/10 PVUs per Client license = 640 client licenses.
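The PVU-based calculation above follows a simple formula: total cores times PVUs per core, divided by 10 PVUs per license. Here is a minimal sketch of that arithmetic in Python; the function name and the rounding-up of any fractional remainder are illustrative assumptions, not part of any IBM tool:

```python
# PVU-based GPFS license counting (5724-N94): licenses are priced per 10 PVUs,
# and each AMD Opteron core in this example is worth 50 PVUs.
PVU_PER_CORE = 50
PVU_PER_LICENSE = 10

def licenses_needed(nodes: int, cores_per_node: int) -> int:
    total_pvus = nodes * cores_per_node * PVU_PER_CORE
    # Ceiling division: a fractional remainder would round up to a whole license.
    return -(-total_pvus // PVU_PER_LICENSE)

server_licenses = licenses_needed(4, 8)    # 4 x3655 NSD servers, 8 cores each
client_licenses = licenses_needed(32, 4)   # 32 x3455 NSD clients, 4 cores each
print(server_licenses, client_licenses)  # 160 640
```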
- How do I determine the number of licenses required in a virtualization environment?
- The number of processors for which licenses are required is the smaller of the following:
- The total number of activated processors in the machine
- When GPFS nodes are in partitions with dedicated processors, then licenses are required for the number of processors dedicated to those partitions.
- When GPFS nodes are LPARs that are members of a shared processing pool, then licenses are required for the smaller of:
- the number of processors assigned to the pool or
- the sum of the virtual processors of each uncapped partition plus the processors in each capped partition
When the same processors are available to both GPFS Server nodes and GPFS Client nodes, GPFS Server licenses are required for those processors.
Any fractional part of a processor in the total calculation must be rounded up to a full processor.
- One GPFS node is in a partition with .5 of a dedicated processor -> license(s) are required for 1 processor
- 10 GPFS nodes are in partitions on a machine with a total of 5 activated processors -> licenses are required for 5 processors
- LPAR A is a GPFS node with an entitled capacity of, say, 1.5 CPUs and is set to uncapped in a processor pool of 5 processors.
LPAR A is used in a way that requires Server licenses.
LPAR B is a GPFS node on the same machine as LPAR A and is part of the same shared processor pool as LPAR A.
LPAR B is used in a way that does not require Server licenses, so Client licenses are sufficient.
LPAR B has an entitled capacity of 2 CPUs, but since it too is uncapped, it can use up to 5 processors out of the pool.
For this configuration, Server licenses are required for 5 processors.
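The counting rules above (take the smaller of the pool size and the sum of uncapped virtual processors plus capped processors, and round any fraction up) can be expressed as a short sketch. The function name and argument layout are hypothetical, purely to make the rule concrete:

```python
import math

def processors_to_license(pool_processors, uncapped_virtual, capped_processors):
    """Processors requiring licenses for GPFS nodes in a shared processing pool:
    the smaller of (a) the processors assigned to the pool and (b) the sum of
    virtual processors of uncapped partitions plus processors of capped
    partitions. Fractional processors round up to a full processor."""
    needed = min(pool_processors, sum(uncapped_virtual) + sum(capped_processors))
    return math.ceil(needed)

# Uncapped LPARs A and B share a 5-processor pool; each can draw on the whole
# pool, so licenses are required for all 5 processors.
print(processors_to_license(5, [5, 5], []))  # 5

# A single partition with 0.5 of a dedicated processor rounds up to 1.
print(processors_to_license(5, [], [0.5]))  # 1
```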
- Where can I find the documentation for GPFS?
- The GPFS documentation is available in both PDF and HTML format on the Cluster Information Center at publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html.
- What resources beyond the standard documentation can help me learn about and use GPFS?
- For additional information regarding GPFS see:
- How can I ask a more specific question about GPFS?
- Depending upon the nature of your question, you may ask it in one of several ways.
If your question does not fall into the above categories, you can send a note directly to the GPFS development team at email@example.com. However, this mailing list is informally monitored as time permits and should not be used for priority messages to the GPFS team.
- Does GPFS participate in the IBM Academic Initiative Program?
- GPFS no longer participates in the IBM Academic Initiative Program.
If you are currently using GPFS with an education license from the Academic Initiative, we will continue to support GPFS 3.2 on a best-can-do basis via email for the licenses you have. However, no additional or new licenses of GPFS will be available from the IBM Academic Initiative program. You should work with your IBM client representative on what educational discount may be available for GPFS. See www.ibm.com/planetwide/index.html
- What levels of the AIX O/S are supported by GPFS?
Table 3. GPFS for AIX
| GPFS | AIX V7.1 | AIX V6.1 | AIX V5.3 | AIX V5.2 |
|---|---|---|---|---|
| V3.4 | X (GPFS 3.4.0-2, or later) | X | X | |
| V3.3 | X (GPFS 3.3.0-10, or later) | X | X | |
| V3.2 | X (GPFS 3.2.1-24, or later) | X | X | X |
- The following additional filesets are required by GPFS:
These can be downloaded from Fix Central at http://www.ibm.com/eserver/support/fixes/fixcentral
- xlC.aix50.rte (C Set ++ Runtime for AIX 5.0), version 22.214.171.124 or later
- xlC.rte (C Set ++ Runtime), version 126.96.36.199 or later
- Enhancements to the support of Network File System (NFS) V4 in GPFS are only available on AIX V5.3 systems with the minimum technology level of 5300-04 applied, AIX V6.1 or AIX V7.1.
- The version of OpenSSL shipped with some versions of AIX 7.1, AIX V6.1 and AIX V5.3 will not work with GPFS due to a change in how the library is built. To obtain the level of OpenSSL which will work with GPFS, see the question How do I get OpenSSL to work on AIX?
- Service is required for GPFS to work with some levels of AIX, please see the question What are the current advisories for GPFS on AIX?
- What Linux distributions are supported by GPFS?
- GPFS supports the following distributions:
Table 4. Linux distributions supported by GPFS
| | RHEL 6 | RHEL 5 | RHEL 4 | SLES 11 | SLES 10 | SLES 9 |
|---|---|---|---|---|---|---|
| GPFS for Linux on x86 Architecture, V3.2 | | X | | | | |
| GPFS for Linux on POWER, V3.2 | | X | | | | |
Please also see the questions:
- What are the latest kernel levels that GPFS has been tested with?
- While GPFS runs with many different AIX fixes and Linux kernel levels, we strongly suggest that customers apply the latest fix levels and kernel service updates for their operating system. To download the latest GPFS service updates, go to the GPFS page on Fix Central.
Please also see the questions:
GPFS for Linux on Itanium Servers is available only through a special Programming Request for Price Quotation (PRPQ). The install image is not generally available code. It must be requested by an IBM client representative through the RPQ system and approved before order fulfillment. If interested in obtaining this PRPQ, reference PRPQ # P91232 or Product ID 5799-GPS.
Table 5. GPFS for Linux RedHat support
| RHEL Distribution | Latest Kernel Level Tested | Minimum GPFS Level |
|---|---|---|
| 6.0 | 2.6.32-71 | GPFS V3.4.0-2 / V3.3.0-9 / V3.2.1-24 |
| 5.5 | 2.6.18-194 | GPFS V3.4.0-1 / V3.3.0-5 / V3.2.1-20 |
| 5.4 | 2.6.18-164 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| 5.3 | 2.6.18-128 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| 5.2 | 2.6.18-92.1.10 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| 4.8 | 2.6.9-89 | GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1 |
| 4.7 | 2.6.9-78 | GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1 |
| 4.6 | 2.6.9-67.0.7 | GPFS V3.4.0-3 / V3.3.0-1 / V3.2.1-1 |
Table 6. GPFS for Linux SLES support
| SLES Distribution | Latest Kernel Level Tested | Minimum GPFS Level |
|---|---|---|
| SLES 11 SP1 | 2.6.32.12-0.7.1 | GPFS V3.4.0-1 / V3.3.0-7 / V3.2.1-24 |
| SLES 11 | 2.6.27.19-5 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-13 |
| SLES 10 SP3 | 2.6.16.60-0.59.1 | GPFS V3.4.0-1 / V3.3.0-5 / V3.2.1-18 |
| SLES 10 SP2 | 2.6.16.60-0.27 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| SLES 10 SP1 | 2.6.16.46-0.8 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| SLES 10 | 2.6.16.21-0.25 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| SLES 9 SP4 | 2.6.5-7.312 | GPFS V3.3.0-1 / V3.2.1-1 |
| SLES 9 SP3 | 2.6.5-7.286 | GPFS V3.3.0-1 / V3.2.1-1 |
Table 7. GPFS for Linux Itanium support
| Distribution | Latest Kernel Level Tested | Minimum GPFS Level |
|---|---|---|
| RHEL 4.5 | 2.6.9-55.0.6 | GPFS V3.4.0-1 / V3.3.0-1 / V3.2.1-1 |
| SLES 10 SP1 | 2.6.16.46-0.8 | |
| SLES 9 SP3 | 2.6.5-7.286 | |
- What are the current restrictions on GPFS Linux kernel support?
- Current restrictions on GPFS Linux kernel support include:
- GPFS does not support any Linux environments with SELinux enabled in enforcement mode.
- GPFS has the following restrictions on RHEL support:
- GPFS does not currently support the Transparent Huge Page (THP) feature available in RHEL 6.0. This feature should be disabled at boot time by appending transparent_hugepage=never to the kernel boot options.
- GPFS does not currently support the following kernels:
- RHEL hugemem
- RHEL largesmp
- RHEL uniprocessor (UP)
- GPFS V3.4.0-2, 3.3.0-9, 3.2.1-24, or later supports RHEL 6.0.
When installing the GPFS 3.3 base RPMs on RHEL 6, a symbolic link from /usr/bin/ksh to /bin/ksh is required to satisfy the /usr/bin/ksh dependency.
- GPFS V3.4.0-3 or later supports RHEL 4.
- GPFS V3.3.0-5 or later supports RHEL 5.5
- GPFS V3.2.1-20 or later supports RHEL 5.5
- RHEL 5.0 and later on POWER requires GPFS V220.127.116.11 or later
- On RHEL 5.1, the automount option is slow; this issue should be addressed in the 2.6.18-53.1.4 kernel.
- RedHat Kernel 2.6.18-164.11.1 or later requires hot fix package for BZ567479. Please contact RedHat support.
- GPFS has the following restrictions on SLES support:
- GPFS V3.3.0-7 or later supports SLES 11 SP1.
- GPFS V3.3.0-5 or later supports SLES 10 SP3.
- GPFS V3.3 supports SLES 9 SP 3 or later.
- GPFS V3.2.1-24 or later, supports SLES 11 SP1.
- GPFS V3.2.1-13 or later supports SLES 11.
- GPFS V3.2.1-10 or later supports the SLES 10 SP2 2.6.16.60-0.34-bigsmp i386 kernel.
- GPFS V3.2.1-18 or later supports SLES 10 SP3.
- GPFS does not support SLES 10 SP3 on POWER 4 machines.
- The GPFS 3.2 GPL build requires imake. If imake was not installed on the SLES 10 or SLES 11 system, install xorg-x11-devel-*.rpm.
- There is required service for support of SLES 10. Please see question What is the current service information for GPFS?
- GPFS for Linux on POWER does not support mounting of a file system with a 16KB block size when running on either RHEL 5 or SLES 11.
- There is service required for Linux kernels 2.6.30 or later, or on RHEL5.4 (2.6.18-164.11.1.el5). Please see question What are the current advisories for GPFS on Linux?
Please also see the questions:
- Is GPFS on Linux supported in a virtualization environment?
- You can install GPFS on a virtualization server or on a virtualized guest OS. When running GPFS on a guest OS, the guest must be an OS version that is supported by GPFS and must run as an NSD client. GPFS on Linux is supported in the following virtualization environments installed on the virtualization servers:
- GPFS V3.2.1-3, GPFS V3.3.0-7 and V3.4, or later, support the RHEL xen kernel for NSD clients only.
- GPFS V3.2.1-21, V3.3.0-7, and V3.4, or later, support the SLES xen kernel for NSD clients only.
- GPFS has been tested with VMware ESX 3.5 for NSD clients only and is supported on all Linux distributions that are supported by both VMware and GPFS.
- GPFS has been tested with guests on RHEL 6.0 KVM hosts for NSD clients only and is supported on all Linux distributions that are supported by both the RHEL 6.0 KVM host and GPFS.
- What levels of the Windows O/S are supported by GPFS?
Table 8. Windows O/S support
|Windows Server 2003 R2 x64||Windows Server 2008 x64 (SP 2)||Windows Server 2008 R2|
Also see the questions:
- What are the limitations of GPFS support for Windows ?
- What are the current advisories for all platforms supported by GPFS?
- What are the current advisories for GPFS on Windows?
- What are the limitations of GPFS support for Windows ?
- Current limitations include:
- GPFS for Windows does not support a file system feature called Directory Change Notification. This limitation can have adverse effects when GPFS files are exported using Windows file sharing. In detail, the issue relates to the SMB2 protocol used on Windows Vista and later operating systems. Because GPFS does not support Directory Change Notification, the SMB2 redirector cache on the client will not see cache invalidate operations if metadata is changed on the server or on another client. The SMB2 client will continue to see its cached version of the directory contents until the redirector cache expires. Hence, client systems may see an inconsistent view of the GPFS namespace. A workaround for this limitation is to disable the SMB2 protocol on the server. This ensures that SMB2 is not used even if the client is SMB2 capable. To disable SMB2, follow the instructions under the "MORE INFORMATION" section at http://support.microsoft.com/kb/974103
- In GPFS homogeneous Windows clusters (GPFS V3.4 or later), the Windows nodes can perform most of the management and administrative operations. The exceptions include:
- Certain GPFS commands to apply policy, administer quotas and ACLs.
- Support for the native Windows Backup utility.
Please refer to the GPFS Concepts, Planning and Installation Guide for a full list of limitations.
- Tivoli Storage Manager (TSM) Backup Archive 6.2 client is only verified to work with GPFS V3.3. See the TSM Client Functional Compatibility Table at http://www-01.ibm.com/support/docview.wss?uid=swg21420322.
- There is no migration path from Windows Server 2003 R2 (GPFS V3.2.1-5 or later) to Windows Server 2008 (GPFS V3.3).
To move GPFS V3.2.1-5 or later Windows nodes to GPFS V3.3:
- Remove all the Windows nodes from your cluster.
- Uninstall GPFS 3.2.1 from your Windows nodes. This step is not necessary if you are reinstalling Windows Server 2008 from scratch (next step below) and not upgrading from Server 2003 R2.
- Install Windows Server 2008 and the required prerequisites on the nodes.
- Install GPFS 3.3 on the Windows Server 2008 nodes.
- Migrate your AIX and Linux nodes from GPFS 3.2.1-5 or later, to GPFS V3.3.
- Add the Windows nodes back to your cluster.
See the GPFS documentation at http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html for details on uninstalling, installing and migrating GPFS.
- Windows only supports the DEFAULT and AUTHONLY ciphers.
- A DMAPI-enabled file system may not be mounted on a Windows node.
- What are the requirements for the use of OpenSSH on Windows nodes?
- GPFS uses the SUA Community version of OpenSSH to support its administrative functions when the cluster includes Windows nodes and UNIX nodes. Microsoft does not provide SSH support in the SUA Utilities and SDK, and the remote shell service included with SUA has limitations that make it unsuitable for GPFS. Interop Systems Inc. hosts the SUA Community Web site (http://www.interopsystems.com/community/), which includes a forum and other helpful resources related to SUA and Windows/UNIX interoperability. Interop Systems also provides SUA Add-on Bundles that include OpenSSH (http://www.suacommunity.com/tool_warehouse.aspx) and many other packages; however, IBM recommends installing only the SUA Community packages that your environment requires. The steps below outline a procedure for installing OpenSSH. This information could change at any time. Refer to the Interop Community Forums (http://www.suacommunity.com/forum/default.aspx) for the current and complete installation instructions:
- Download the Bootstrap Installer (6.0/x64) from Package Install Instructions (http://www.suacommunity.com/pkg_install.htm) and install it on your Windows nodes.
- From an SUA shell, run pkg_update -L openssh
- Log on as root and run regpwd from an SUA shell.
Complete the procedure as noted in the GPFS Concepts, Planning, and Installation Guide under the heading "Installing and configuring OpenSSH".
- Can different GPFS maintenance levels coexist?
- Certain levels of GPFS can coexist, that is, be active in the same cluster and simultaneously access the same file system. This allows for rolling upgrades of GPFS nodes within a cluster. Further it allows the mounting of GPFS file systems from other GPFS clusters that may be running a different maintenance level of GPFS. The current maintenance level coexistence rules are:
- All GPFS V3.4 maintenance levels can coexist with each other and with GPFS V3.3 Maintenance Levels, unless otherwise stated in this FAQ.
- All GPFS V3.3 maintenance levels can coexist with each other and with GPFS V3.2 Maintenance Levels, unless otherwise stated in this FAQ.
- All GPFS V3.2 maintenance levels can coexist with each other, unless otherwise stated in this FAQ.
See the Migration, coexistence and compatibility information in the GPFS V3.2 Concepts, Planning, and Installation Guide
- The default file system version was incremented in GPFS 3.2.1-5. File systems created using GPFS v3.2.1-5 code without using the --version option of the mmcrfs command will not be mountable by earlier code.
- GPFS V3.2 maintenance levels 220.127.116.11 and 18.104.22.168 have coexistence issues with other maintenance levels.
Customers using a mixed maintenance level cluster that have some nodes running 22.214.171.124 or 126.96.36.199 and other nodes running other maintenance levels should uninstall the gpfs.msg.en_US rpm/fileset from the 188.8.131.52 and 184.108.40.206 nodes. This should prevent the wrong message format strings going across the mixed maintenance level nodes.
Attention: Do not use the mmrepquota command if there are nodes in the cluster running a mixture of 220.127.116.11 and other maintenance levels. A fix is provided in APAR #IZ16367.
- Are there any requirements for Clustered NFS (CNFS) support in GPFS?
- GPFS supports Clustered NFS (CNFS) on SLES 11, SLES 10, SLES 9, RHEL 5, and RHEL 4. However, there are limitations:
- NFS v3 exclusive byte-range locking works properly only on clients of:
- x86-64 with SLES 10 SP2 or later, SLES 11, and RHEL 5.4
- ppc64 with SLES 11 and RHEL 5.4
- Kernel patches are required for distributions prior to SLES 10 SP2 and RHEL 5.2:
Table 9. CNFS requirements

See also What Linux kernel patches are provided for clustered file systems such as GPFS?

| Distribution | lockd patch required | sm-notify required | rpc.statd required |
|---|---|---|---|
| SLES 10 SP1 and prior | X | X | not required |
| SLES 9 | X | X | not required |
| RHEL 5.1 and prior | X (not available for ppc64) | included in base distribution | X |
| RHEL 4 | X (not available for ppc64) | included in base distribution | X |
- Does GPFS support NFS V4?
- Enhancements to the support of Network File System (NFS) V4 in GPFS are available on
- AIX V5.3 systems with the minimum technology level of 5300-04 applied, AIX V6.1 or AIX V7.1.
- GPFS V3.3 and 3.4 support NFS V4 on the following Linux distributions:
- RHEL 5.5
- RHEL 6.0
- SLES 11 SP1
- Delegations must be disabled if a GPFS file system is exported over Linux/NFSv4 on RHEL 5.2; to do so, run echo 0 > /proc/sys/fs/leases-enable on the RHEL 5.2 node. Other nodes can continue to grant delegations (for NFSv4) and/or oplocks (for CIFS). On all platforms, only read delegations are supported; this has no impact on applications.
- GPFS CNFS does not support NFSv4.
- Windows-based NFSv4 clients are not supported with Linux/NFSv4 servers because of their use of share modes.
- If a file system is to be exported over NFSv4/Linux, then it must be configured to support POSIX ACLs (with the -k all or -k posix option). This is because NFSv4/Linux servers will only handle ACLs properly if they are stored in GPFS as POSIX ACLs.
- SLES clients do not support NFSv4 ACLs.
- Concurrent AIX/NFSv4 servers, Samba servers, and GPFS Windows nodes in the cluster are allowed. NFSv4 ACLs may be stored in GPFS file systems via Samba exports, NFSv4/AIX servers, GPFS Windows nodes, ACL commands of Linux NFSv3, and ACL commands of GPFS. However, clients of Linux NFSv4 servers will not be able to see these ACLs, just the permissions from the mode.
Table 10. Readiness of NFSv4 support on different Linux distros with some patches

| | Redhat 5.5 and 6.0 | SLES 11 SP1 |
|---|---|---|
| ACLs | Yes | Yes as a server. No as a client. |
For more information on the support of NFS V4, please see the GPFS documentation updates file at http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.gpfs.doc/gpfsbooks.html
- Are there any requirements for the use of the Persistent Reserve support in GPFS?
- GPFS support for Persistent Reserve on AIX requires:
- For GPFS V3.2 on AIX 5L™ V5.2, APAR IZ00673
- For GPFS V3.2, V3.3, or V3.4 on AIX 5L V5.3, APARs IZ01534, IZ04114, and IZ60972
- For GPFS V3.2, V3.3, or V3.4 on AIX V6.1, APAR IZ57224
- Are there any considerations when utilizing the Simple Network Management Protocol (SNMP)-based monitoring capability in GPFS?
- Considerations for the use of the SNMP-based monitoring capability in GPFS V3.2, V3.3 and V3.4 include:
- The SNMP collector node must be a Linux node in your GPFS cluster. GPFS utilizes Net-SNMP, which is not supported on AIX.
- Support for ppc64 requires the use of Net-SNMP 5.4.1. Binaries for Net-SNMP 5.4.1 on ppc64 are not available. You will need to download the source and build the binary. Go to http://net-snmp.sourceforge.net/download.html
- If the monitored cluster is relatively large, you need to increase the communication time-out between the SNMP master agent and the GPFS SNMP subagent. In this context, a cluster is considered to be large if the number of nodes is greater than 25, or the number of file systems is greater than 15, or the total number of disks in all file systems is greater than 50. For more information see Configuring Net-SNMP in the GPFS: Advanced Administration Guide.