Best Practices NFS Mount for Data Domain


I was recently setting up a DD160 for a small Oracle instance and ran into an issue when trying to mount the Data Domain via NFS. I tried several variations of the mount command and realized that I needed to be very specific: because I was using a basic mount command, the OS was missing some vital information about how it should connect to the Data Domain. I also realized that even though this Oracle database is relatively small, the NFS mount should still be optimized for performance. Below is the message I was receiving:

mount: wrong fs type, bad option, bad superblock on datadomain:/data/col1/OracleBackup, or too many mounted file systems
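
Before adjusting mount options, it can help to confirm that the export is actually visible from the client. As a quick sanity check (this is just an illustration, assuming the same Data Domain hostname as in the error above, and that the standard NFS client utilities are installed):

showmount -e datadomain

If /data/col1/OracleBackup does not appear in the export list, the problem is on the Data Domain side rather than in the mount options.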


I was trying to mount via NFS on an old Red Hat version 2 OS and was having a hard time with the options. I consulted one of our Linux Data Domain experts, and he sent me the following syntax:


mount -t nfs -o hard,intr,nfsvers=3,tcp,rsize=1048600,wsize=1048600,bg datadomain:/data/col1/OracleBackup /OracleBackup
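
If you want the mount to come back after a reboot, the same options can go into /etc/fstab. This is a minimal sketch assuming the same host, export, and mount point as in the command above:

datadomain:/data/col1/OracleBackup /OracleBackup nfs hard,intr,nfsvers=3,tcp,rsize=1048600,wsize=1048600,bg 0 0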

The NFS mount was successful and we are now using optimized NFS mount parameters. Still, careful analysis of your environment, from both the client and the Data Domain point of view, is the first step toward optimal NFS performance. Beyond general network configuration (appropriate network capacity, faster NICs, full duplex settings to reduce collisions, agreement on network speed among the switches and hubs, and so on), one of the most important client optimization settings is the NFS data transfer buffer size, specified by the mount command options rsize and wsize.
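
On the client side, the NIC speed and duplex settings are easy to verify before touching any NFS options. This is just an illustration; the interface name (eth0 here) will vary, and on very old systems mii-tool may be available where ethtool is not:

ethtool eth0 | grep -E 'Speed|Duplex'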

The mount command options rsize and wsize specify the size of the chunks of data that the client and the Data Domain pass back and forth to each other. If no rsize and wsize options are specified, the default varies by which version of NFS is in use. The most common default is 4K (4096 bytes), although for TCP-based mounts in 2.2 kernels, and for all mounts beginning with 2.4 kernels, the server side (here, the Data Domain) specifies the default block size.
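
To see which rsize and wsize values a mount actually negotiated, you can inspect the mount entry on the client. A simple check, assuming the mount point used above (newer clients also report this via nfsstat -m; very old kernels may not list every option):

grep OracleBackup /proc/mounts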

The theoretical limit for the NFS V2 protocol is 8K. For the V3 protocol, the limit is specific to the server, in this case the Data Domain. On the Linux server, the maximum block size is defined by the value of the kernel constant NFSSVC_MAXBLKSIZE, found in the Linux kernel source file ./include/linux/nfsd/const.h. The current maximum block size for the kernel, as of 2.4.17, is 8K (8192 bytes), but the patch set implementing NFS over TCP/IP transport in the 2.4 series, as of this writing, uses a value of 32K (defined in the patch as 32*1024) for the maximum block size.
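
If you are building or patching your own kernel, you can confirm the compiled-in maximum by checking that constant in the source tree. A hedged example, assuming the source lives under /usr/src/linux (the header path above applies to 2.4-era kernels):

grep NFSSVC_MAXBLKSIZE /usr/src/linux/include/linux/nfsd/const.h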

All 2.4 clients currently support up to 32K block transfer sizes, allowing the standard 32K block transfers across NFS mounts from other servers, such as Solaris, without client modification. The defaults may be too big or too small, depending on the specific combination of hardware and kernels. On the one hand, some combinations of Linux kernels and network cards (largely on older machines) cannot handle blocks that large. On the other hand, if they can handle larger blocks, a bigger size might be faster.

You will want to experiment to find rsize and wsize values that work and are as fast as possible. If your network environment is not heavily used, you can test the speed of your options with some simple commands, such as the timed writes and reads sketched below.
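
One simple approach is to time a large sequential write to, and read from, the NFS mount. This is only a sketch; the 200 MB file size, block size, and path are arbitrary, and you would repeat the test after remounting with different rsize and wsize values:

# write test: push 200 MB of zeros to the NFS mount
time dd if=/dev/zero of=/OracleBackup/testfile bs=16k count=12800

# read test: pull the same file back (unmount and remount first so the client cache does not skew the result)
time dd if=/OracleBackup/testfile of=/dev/null bs=16k

# clean up the test file
rm /OracleBackup/testfile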

