NFS over OpenVPN

This was originally a post from late February 2009, but I’ve recently gotten around to playing with NFS over OpenVPN again… So here goes: the original post, with some updates!

In my previous post I fixed my speed issues with OpenVPN; now it was time to get my hands dirty with NFS. NFS is great in a LAN environment, but when it comes to transferring files over a WAN, NFS needs to be tuned! (Side note: NFS should be tuned in any environment….)

If you have even remotely touched the subject of tuning NFS, you will know that rsize/wsize and TCP vs. UDP get mentioned, and this article is no exception…. These options are vital for tuning an NFS setup. I will not go into explaining all the different options and what they mean; I will basically just explain how I went about tuning my setup.
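Since rsize/wsize and the transport protocol are the main knobs, here is what they look like on the command line. This is only an illustration using the same placeholder variables as below; the sizes are common starting points, not a recommendation for your link:

```
#Example only – rsize/wsize set the read/write block size, tcp picks the transport
mount_nfs -o rsize=32768,wsize=32768,tcp,ro $server:/path $local_path
```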

In my scenario I’m only interested in read performance, but the same test should work just fine for tuning write performance as well. Basically, I’m creating a dummy file using `dd`; the bigger the file, the more accurate your numbers will be. But since you might be working over a slow WAN, be sensible!

The Testing Method

#Creating dummy file on server (will create a 64MB dummy file)
dd if=/dev/zero of=/some_path/testfile bs=8k count=8192

#Mount with or without options on client – variables should be self-explanatory…
mount_nfs -o $options $server:/path $local_path

#Time a dummy file transfer on client
time dd if=$client_mount/$filename of=/dev/null bs=16k

Additionally, I also started a ping between the hosts. This is essential, because latency has a big impact on performance.
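To compare several option sets without retyping everything, the steps above can be wrapped in a small driver script. This is a dry-run sketch only: the host and paths are placeholders, and it just echoes the commands; remove the echos (and run as root) to do the real thing:

```shell
#!/bin/sh
#Dry-run sketch: prints the test commands for each set of mount options.
#Placeholders: "host" and the share/mount paths – substitute your own.
server="host"; share="/share1"; mnt="/mnt/share1"
for opts in "vers=4.0alpha" "vers=4.0alpha,ro,async,noatime"; do
  echo mount_nfs -o "$opts" "$server:$share" "$mnt"
  echo time dd if="$mnt/testfile" of=/dev/null bs=16k
  echo umount "$mnt"
done
```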

Last time I was playing around with this I used NFSv3, but now NFSv4 is getting more common, and it was time to tune my setup to fit NFSv4. Well, first you need to set up NFSv4.

Server Setup

OpenVPN Device: TUN
Protocol: UDP
MTU Size: 1500
Encryption: BF-CBC
LZO-Compression: “On”

File: /etc/exports
V4: /mnt
/mnt/share1 -ro -network -mask
/mnt/share2 -ro -network -mask
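For reference, here is what a filled-in version of that exports file could look like on FreeBSD. The 10.8.0.0/24 network is purely illustrative (a common OpenVPN default); use whatever subnet your tunnel actually hands out:

```
#Example /etc/exports – the network/mask below are illustrative, not mine
V4: /mnt
/mnt/share1 -ro -network 10.8.0.0 -mask 255.255.255.0
/mnt/share2 -ro -network 10.8.0.0 -mask 255.255.255.0
```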

Client Setup

#Complete .conf file for OpenVPN Client
dev tun
proto udp
tun-mtu 1500
remote 1194
pkcs12 xxxxx.p12
cipher BF-CBC
verb 3
ns-cert-type server
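The client config above implies a matching config on the server side, which I have not shown. As a rough sketch only (certificate paths and the VPN subnet are placeholders, and your setup may differ):

```
#Sketch of a matching OpenVPN server config – paths and subnet are placeholders
port 1194
proto udp
dev tun
tun-mtu 1500
cipher BF-CBC
comp-lzo
server 10.8.0.0 255.255.255.0
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
verb 3
```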

Mount commands:
mount_nfs -o vers=4.0alpha,ro,async,noatime host:/share1 /mnt/share1
mount_nfs -o vers=4.0alpha,ro,async,noatime host:/share2 /mnt/share2
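After mounting it is worth double-checking which options actually took effect, since client and server negotiate some of them. On BSD-flavoured systems something like this should do (output format varies by OS):

```
#Verify the mounts and their negotiated options
mount | grep nfs
nfsstat -m
```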

Test Results

Normal NFSv4
811 packets transmitted, 811 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 13.302/19.622/36.311/2.702 ms
67108864 bytes transferred in 734.698714 secs (91342 bytes/sec)

NFSv4 with ro,async,noatime
1176 packets transmitted, 1176 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 12.980/18.780/34.096/3.033 ms
67108864 bytes transferred in 586.729412 secs (114378 bytes/sec)

NFSv4 with ro,async,noatime and LZO-Compression
2747 packets transmitted, 2747 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 12.580/17.163/42.907/3.502 ms
67108864 bytes transferred in 427.970607 secs (156807 bytes/sec)

Same as above, just as a control
1340 packets transmitted, 1340 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 12.447/17.406/725.741/19.685 ms
67108864 bytes transferred in 429.265232 secs (156334 bytes/sec)
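dd reports throughput in raw bytes/sec. A quick shell snippet (shown here on the first result line above) converts that into KB/s; the sed pattern assumes the BSD dd output format used in these tests:

```shell
#Pull the bytes/sec figure out of a dd summary line and convert to KB/s
line='67108864 bytes transferred in 734.698714 secs (91342 bytes/sec)'
bps=$(printf '%s\n' "$line" | sed -n 's/.*(\([0-9][0-9]*\) bytes\/sec).*/\1/p')
echo "${bps} bytes/sec = $((bps / 1024)) KB/s"
```

For the run above this prints “91342 bytes/sec = 89 KB/s”.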

This is just some raw data from my testing. The ping was started in another shell, just to get an idea of the latency during the actual transfer test.

As it looks, I’m now up at speeds above 10 Mbit/s….. compared to my last results, which varied from 5-8 Mbit/s.

One Comment

  • mike, 20/09/2011

    Can you tell me how you routed/iptables’d your configuration to allow NFS to work on both sides?

