Increase NFS performance: Linux nconnect

As we all know, data transfer is usually session based by default. NFS is one of those protocols that pushes all its traffic through a single session. On the client side you can change this with the nconnect mount option, which has been available since Linux kernel 5.3.
As an Ubuntu Server 20.04 user I'm in luck, since it ships with Linux kernel 5.4.
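
To confirm your kernel is recent enough, just check the running version; anything reporting 5.3 or higher has the option available:

uname -r    # 5.3 or higher is needed for nconnect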

So, now … how does this work and what are the benefits?

First of all, it's not that hard to use. You simply add the nconnect option to the mount option list (-o), which allows multiple TCP sessions to be created.

Example: mount -t nfs -o rw,nconnect=8 10.50.3.10:/data /media/user/data
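
If you want the mount to persist across reboots, the same options can go into /etc/fstab. This is just a sketch reusing the server address and paths from the example above:

10.50.3.10:/data   /media/user/data   nfs   rw,nconnect=8   0   0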

My advice here would be to set nconnect to the number of available interface ports on the storage side, so one session per port for maximum throughput. The maximum nconnect value is 16, so keep that in mind when designing your storage architecture.

Also, don't worry about any services that use the NFS mount to access data. To the service or application it still looks like a single session; the extra TCP connections are handled transparently underneath.
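
If you want to see the extra sessions for yourself, count the TCP connections to the NFS port (2049) after mounting; with nconnect=8 you should see 8 established connections to the server:

ss -tn state established '( dport = :2049 )'
# or the classic way
netstat -ant | grep 2049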


Now, the results:

My test setup had 8x 1 Gbit NICs on the client side and 12x 1 Gbit NICs on the storage side.

Result: mount -t nfs -o rw 10.50.3.10:/data /media/user/data

Traffic flowed at around 100 MB/s, which makes sense.
(1000 Mbit/s divided by 8 bits per byte = 125 MB/s, minus some overhead)

Result: mount -t nfs -o rw,nconnect=8 10.50.3.10:/data /media/user/data

Traffic flowed at around 750 MB/s, which also makes sense.
(8x 1000 Mbit/s divided by 8 bits per byte = 1000 MB/s, minus some overhead)
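
For reference, a simple sequential write with dd is enough to generate this kind of traffic. This is only an example test, not necessarily the exact workload behind the numbers above; /media/user/data is the mount point from the earlier examples:

dd if=/dev/zero of=/media/user/data/testfile bs=1M count=10240 oflag=direct status=progress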


The gain:

Before, bandwidth-intensive single-client services were limited by the bandwidth of one interface on the storage side (or client side), and you had to scale accordingly. Clients were throttled as a result and were unable to take advantage of the full potential of your storage array.

With the nconnect mount option, services or apps can now easily take full advantage of the infrastructure's capabilities and get faster access to their storage.



9 Responses so far.

  1. In my case, the NFS share becomes completely inoperable with any value of nconnect, from 2 to 16. When you try to write to a network folder mounted with this parameter, mc freezes, and it becomes impossible to work with the share until the system is rebooted.

  2. There are a couple of things:
    1. How many interfaces do your client and server have?
    2. Do you have jumbo frames enabled on the client/server side and on your switch infra in between?
    3. What is the CPU load on the server side when this happens?
    4. What is your OS?

    Based on that I can give you a more detailed answer.
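
    A few quick commands that show most of that info (assuming a Debian/Ubuntu style client):

    ip -br addr                    # interfaces and addresses
    ip link show | grep -i mtu     # MTU per interface (jumbo frames = MTU 9000)
    top                            # CPU load while the hang is happening
    lsb_release -a                 # OS version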

  3. Thanks for your reply.
    1. Intel Original X550T2BLK 2x RJ45 10Gb/s on both the NAS and the miner, connected via a Cisco SX350X-52 52-Port 10GBase-T Stackable Managed Switch.
    2. About jumbo frames – should we use them or not? It looks like everything is currently running with the default MTU of 1500.
    3. Absolutely ordinary CPU load. The CPU is almost idle.
    4. Ubuntu 20.04 on miner and TrueNAS-12.0-U4 (FreeBSD 12.2-RELEASE-p6) on NAS.

  4. Impossible result! 4x 1 Gbit links on the CLIENT is the maximum throughput bandwidth available. Expect ~350-400 MB/sec max, not 8x 1000 (what the server side has available). The lowest common denominator wins.

  5. Hello,

    Just found your article, which is interesting since I am scratching my head over how to increase bandwidth with NFS.

    I have a storage NAS with a 10G NIC, so on the server side there is no issue. However, my clients are NUCs which have only 2 NICs (1G+1G, or 1G+2.5G), I do not use jumbo frames, and everything runs Linux (Debian 10/11, Ubuntu 20.04). In this case, will the nconnect parameter be useful for me?

  6. You are right, I updated the post … I wrote it while I was still testing, and the final setup is 8x on the client side and 12x on the server side, which makes more sense with the results (the post now reflects that).

    Thanks Spiffy for pointing that out 🙂

  7. I have two 1 Gbit NICs on the NFS client: 192.168.1.128 and 192.168.1.200
    and two 1 Gbit NICs on the NFS server: 192.168.1.139 and 192.168.1.160

    `sudo mount -t nfs -o rw,nconnect=4 192.168.1.160:/mnt/sdp/share /mnt/nfs/`
    After mounting, there are 4 TCP connections established.
    netstat -anpt | grep 160:2049
    tcp 0 0 192.168.1.128:1021 192.168.1.160:2049 ESTABLISHED –
    tcp 0 0 192.168.1.128:50082 192.168.1.160:2049 TIME_WAIT –
    tcp 0 0 192.168.1.128:971 192.168.1.160:2049 ESTABLISHED –
    tcp 0 0 192.168.1.128:967 192.168.1.160:2049 ESTABLISHED –
    tcp 0 0 192.168.1.128:865 192.168.1.160:2049 ESTABLISHED –

    A dd test shows data only transfers from 192.168.1.128 to 192.168.1.160.

  8. Hey Zhua, I think you need to change the load-balancing algorithm on your bond to tlb or alb to balance that better. What are you using now?
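
    You can check what the bond is doing right now with something like this (bond0 is just a guess at the interface name):

    cat /proc/net/bonding/bond0 | grep -i "bonding mode"

    The mode itself is set in your network configuration (netplan, ifupdown, etc.), so how you change it depends on your distro.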
