Difference between 'sync' and 'async' mount options

What is the difference between the sync and async mount options from the end user's point of view? Does a file system mounted with one of these options work faster than with the other? Which option is the default if neither is set?

man mount says that the sync option may reduce the lifetime of flash memory, but that may be obsolete conventional wisdom. Anyway, this concerns me a bit, because my primary drive, which holds the / and /home partitions, is an SSD.

The Ubuntu (14.04) installer specified neither sync nor async for the / partition, but set async for /home via the defaults option. Here is my /etc/fstab; I added some lines (see the comment) but did not change anything in the lines written by the installer:

# / was on /dev/sda2 during installation
UUID=7e4f7654-3143-4fe7-8ced-445b0dc5b742 /     ext4  errors=remount-ro 0  1
# /home was on /dev/sda3 during installation
UUID=d29541fc-adfa-4637-936e-b5b9dbb0ba67 /home ext4  defaults          0  2
# swap was on /dev/sda4 during installation
UUID=f9b53b49-94bc-4d8c-918d-809c9cefe79f none  swap  sw                0  0

# here goes part written by me:

# /mnt/storage
UUID=4e04381d-8d01-4282-a56f-358ea299326e /mnt/storage ext4 defaults  0  2
# Windows C: /dev/sda1
UUID=2EF64975F6493DF9   /mnt/win_c    ntfs    auto,umask=0222,ro      0  0
# Windows D: /dev/sdb1
UUID=50C40C08C40BEED2   /mnt/win_d    ntfs    auto,umask=0222,ro      0  0

So, given that /dev/sda is an SSD, should I, for the sake of reducing wear, add the async option for the / and /home file systems? Should I set sync or async for the additional partitions I defined in /etc/fstab? What is the recommended approach for SSDs and for HDDs?

Asked By: user77422


async is the opposite of sync, which is rarely used. For local filesystems, async is the default, so you don't need to specify it explicitly. (NFS exports are the exception: in nfs-utils releases up to and including 1.0.0, async was the default; in all later releases, sync is the default and async must be explicitly requested if needed.)

The sync option means that all changes to the filesystem in question are immediately flushed to disk, and the write operations are waited for. For mechanical drives that means a huge slowdown, since the system has to move the disk heads into the right position for every write; with sync, the userland process has to wait for the operation to complete. In contrast, with async, the system buffers the write operation and optimizes the actual writes; meanwhile, instead of being blocked, the userland process continues to run. (If something later goes wrong, close() returns -1 with errno = EIO.)
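The contrast is visible from userland with dd, which can request synchronous writes per invocation via oflag=sync. A minimal sketch using a temporary file (on a mechanical disk the second command is dramatically slower; on an SSD the gap is smaller but still present):

```shell
# Same data written twice: first buffered (what an async mount does),
# then with oflag=sync, which waits for the device after every block
# (what a sync mount forces on all writes).
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4k count=256 2>/dev/null && echo "buffered write ok"
dd if=/dev/zero of="$f" bs=4k count=256 oflag=sync 2>/dev/null && echo "synchronous write ok"
rm -f "$f"
```

Prefixing each dd with time makes the difference concrete: the synchronous run pays one device round-trip per 4 KiB block.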

SSD: I don't know exactly how fast SSD memory is compared to RAM, but it is certainly not faster, so sync is likely to give a performance penalty, although not as bad as with mechanical disk drives. As for the lifetime, the wisdom is still valid, since writing a lot to an SSD wears it out. The worst scenario would be a process that makes many changes to the same place: with sync, each of them hits the SSD, while with async (the default) the SSD won't see most of them thanks to the kernel's buffering.

At the end of the day, don't bother with sync; you're most likely fine with async.
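If you want to confirm which mode a mounted filesystem actually ended up with, check its effective options. Because async is the default, it is normally not listed; sync appears only when explicitly requested. A quick sketch, assuming a Linux system with findmnt from util-linux:

```shell
# Effective mount options for the root filesystem; 'sync' would be
# listed here if it were active, async being the unlisted default.
findmnt -no OPTIONS /
# Equivalent without findmnt: read the kernel's own view directly.
grep ' / ' /proc/mounts
```

The same check works for any mount point, e.g. `findmnt -no OPTIONS /home`.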

Answered By: countermode

A word of caution: using the async mount option might not be the best idea if you have a mount that is constantly being written to (e.g. valuable logs, security camera recordings) and you are not protected from sudden power outages. It might result in missing records or incomplete (useless) data. A not-so-smart example: imagine a thief getting into a store and immediately cutting the camera's power cable. The video of the break-in was recorded, but it may never have been flushed to disk: it (or parts of it) might still have been buffered in memory, and so was lost when the camera lost power.

Answered By: Andreas Mikael Bank

For what it's worth, as of 2022 and RHEL 7.9:

Servers using self-encrypting SSDs, or a few with Dell BOSS M.2 cards for the Linux operating system, going over 100 Gbps HDR InfiniBand. By default NFS connects as sync under version 4.1 and proto=tcp. I cannot get NFS v4.2 to work even though cat /proc/fs/nfsd/versions shows +4.2, but I don't know how much better NFS 4.2 would be over 4.1.

I tried /etc/exports with /scratch *(rw), which implicitly means sync, and also with /scratch *(rw,async), and saw no difference in an rsync --progress <source> <dest> for a single NFS copy of a 5 GB tar file, which averaged 460 MB/s (max burst of 480). A local copy of the same file to another folder on the same server (not over the network) averaged 435 MB/s. For reference, I always get a solid 112 MB/s with scp over traditional 1 Gbps copper.

/etc/exports   on rhel-7.9 nfs-server

    /scratch *(rw,no_root_squash)

exportfs -v  on rhel-7.9 nfs-server

    /scratch         <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

mount    on rhel 7.9 nfs-client

    server:/scratch on /scratch type nfs4 (rw,nosuid,noexec,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=,_netdev)

/etc/fstab      on rhel 7.9 nfs-client

    /scratch   nfs4   _netdev,defaults,nosuid,noexec 0 0

also : https://www.admin-magazine.com/HPC/Articles/Useful-NFS-Options-for-Tuning-and-Management (no date on article, makes no mention of nfs v3 vs v4)

Most people use the synchronous option on the NFS server. For synchronous writes, the server replies to NFS clients only when the data has been written to stable storage. Many people prefer this option because they have little chance of losing data if the NFS server goes down or network connectivity is lost.

Asynchronous mode allows the server to reply to the NFS client as soon as it has processed the I/O request and sent it to the local filesystem; that is, it does not wait for the data to be written to stable storage before responding to the NFS client. This can save time for I/O requests and improve performance. However, if the NFS server crashes before the I/O request gets to disk, you could lose data.
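On the server this choice is an export option rather than a mount option. Taking the /etc/exports line shown earlier in this answer as a base, requesting asynchronous behaviour explicitly would look like this (a sketch; the path mirrors the example above):

```
/scratch    *(rw,async,no_root_squash)
```

After editing /etc/exports, exportfs -ra re-reads it, and exportfs -v should then show async in place of sync among the export flags.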

Synchronous or asynchronous mode can be set when the filesystem is mounted on the clients by simply putting sync or async on the mount command line or in the file /etc/fstab for the NFS filesystem. If you want to change the option, you first have to unmount the NFS filesystem, change the option, then remount the filesystem.
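On the client side, the equivalent fstab entry with the mode pinned explicitly could look like this (a sketch; "server" is the same placeholder used in the mount output above):

```
server:/scratch   /scratch   nfs4   _netdev,defaults,nosuid,noexec,async   0 0
```

As noted above, applying a change to this option means umount /scratch followed by mount /scratch.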

If you are choosing to use asynchronous NFS mode, you will need more memory to take advantage of async, because the NFS server will first store the I/O request in memory, respond to the NFS client, and then retire the I/O by having the filesystem write it to stable storage. Therefore, you need as much memory as possible to get the best performance.

The choice between the two modes of operation is up to you. If you have a copy of the data somewhere, you can perhaps run asynchronously for better performance. If you don’t have copies or the data cannot be easily or quickly reproduced, then perhaps synchronous mode is the better option. No one can make this determination but you.

Answered By: ron