Why are swap partitions discouraged on SSD drives, are they harmful?

I often read that one should not place swap partitions on a SSD drive, as this may harm the device. Is this true? Can you please explain the reason to me?

Otherwise I would have thought that placing swap on an SSD is the best choice, as it’s much faster than an HDD, and therefore swapping RAM contents out to the SSD is not as slow as it would be with an HDD…

Asked By: Byte Commander


Early SSDs had a reputation for failing after fewer writes than HDDs. If the swap was used often, then the SSD may fail sooner. This might be why you heard it could be bad to use an SSD for swap.

Modern SSDs don’t have this issue, and they should not fail any faster than a comparable HDD. Placing swap on an SSD will result in better performance than placing it on an HDD due to its faster speeds.

Additionally, if your system has enough RAM (likely, if the system is high-end enough to have an SSD), the swap may be used only rarely anyway.

Answered By: mablem8

Flash memory cells in SSDs have a limited lifespan. Every write cycle (more precisely, every erase cycle; reads cause no wear) wears a memory cell, and at some point it will stop working.

The number of erase cycles a cell can survive is highly variable, and flash in modern SSDs will survive many more than flash in SSDs made several years ago. Additionally, the SSD’s firmware ensures erasures are distributed evenly across all cells (wear leveling). In most drives, spare areas are also set aside to replace damaged cells and delay aging.

To have a value we can use to compare the endurance of an SSD, we can use lifespan measures such as the standards published by JEDEC. A widely available endurance figure is TBW (terabytes written, or alternatively total bytes written), the amount of data that can be written before the drive is expected to fail. Modern SSDs can be rated as low as 20 TB for a consumer product but over 20,000 TB for an enterprise-level SSD.
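To see where a particular drive stands against its rated TBW, many SSDs expose a SMART attribute such as Total_LBAs_Written (attribute name and unit vary by vendor; the raw value below is made up for illustration). Converting it to TiB is simple arithmetic:

```shell
# Hypothetical raw value of the SMART attribute "Total_LBAs_Written",
# e.g. obtained with: sudo smartctl -A /dev/sda | grep -i total_lbas_written
lbas_written=60129542144
sector_size=512   # most drives count LBAs in 512-byte units; check your datasheet
echo "$lbas_written $sector_size" | awk '{ printf "%.1f TiB written\n", $1 * $2 / 2^40 }'
# → 28.0 TiB written
```

Compare that figure against the TBW rating in the drive’s datasheet to estimate remaining endurance.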

Having said that, both the lifespan and the usefulness of an SSD for swapping depend on several factors…

Systems with plenty of RAM

On a system with plenty of RAM and few memory-consuming applications, we will almost never swap. Swap is then merely a safety measure to prevent data loss in case an application eats up all our RAM. In this case, SSD wear from swapping will not be an issue. However, having this mostly-unused swap partition on a conventional hard drive will not lead to any performance drop either, so we can safely put our swap partition (or file) on that significantly cheaper hard drive and use the space on our SSD for something more useful.
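To confirm how little swap such a system actually touches, current usage can be read straight from /proc/meminfo (Linux-specific; values are in KiB):

```shell
# Print current swap usage from /proc/meminfo (all values in KiB).
awk '/^SwapTotal:/ { total = $2 }
     /^SwapFree:/  { free  = $2 }
     END { printf "swap used: %d KiB of %d KiB\n", total - free, total }' /proc/meminfo
```

If the used figure stays near zero over days of uptime, swap placement is largely academic on that machine.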

Systems with little RAM

Things are different on a system where RAM is scarce and cannot be upgraded. In this case, swapping may indeed occur more often, especially when we run memory-intensive applications. In these systems, a swap partition or file on an SSD may lead to a dramatic performance improvement at the cost of a somewhat shorter SSD lifespan. Even then, the decrease may not be large enough to warrant concern: in all likelihood, the SSD will be replaced long before it would have died, because by then several times the storage will be available at a fraction of today’s prices.
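Whether swapping really does occur often on a given machine can be checked from the kernel’s cumulative counters: pswpin and pswpout in /proc/vmstat are pages swapped in and out since boot (Linux-specific):

```shell
# Cumulative pages swapped in/out since boot; numbers that grow steadily
# between two invocations indicate real swap pressure.
awk '$1 ~ /^pswp(in|out)$/ { print $1, $2 }' /proc/vmstat
```

Running this twice a few minutes apart and comparing the values is a quick, tool-free alternative to watching vmstat.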

Hibernating our system

Waking from hibernation is indeed very fast from a SSD. If we’re lucky and our system survives a hibernation without issues, we can consider using an SSD for that. It will wear the SSD more than just booting from it would, but we may feel it’s worth it.

But booting from an SSD may not take much longer than waking from hibernation from an SSD, and it will wear the SSD far less. Personally, I don’t hibernate my system at all – I suspend to RAM or quickly boot from my SSD.

The SSD is the only drive we have

We don’t really have a choice in this case. We don’t want to run without a swap, so we have to put it on the SSD. We may, however, want to have a smaller swap file or partition if we don’t plan to hibernate our system at any point.
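Setting up a smaller swap file is straightforward. A sketch (the path and 64 MiB size are examples; the final swapon and fstab steps need root, so they are shown as comments):

```shell
# Create a swap file; a real one would typically live at /swapfile and be larger.
swapfile=/tmp/swapfile-demo
dd if=/dev/zero of="$swapfile" bs=1M count=64 status=none
chmod 600 "$swapfile"    # swap files must not be readable by other users
mkswap "$swapfile"       # writes the swap signature
# Then, as root:
#   swapon /tmp/swapfile-demo
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```

A swap file has the advantage over a partition that it can be resized or removed later without repartitioning the SSD.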

Note on speed

SSDs are best at quickly accessing and reading many small files and are superior to conventional hard drives for transferring data from sequentially-read small or medium-sized files. A fast conventional hard drive may still perform better than an SSD at writing (and to a lesser extent reading) large audio or video streams or other long unfragmented files. Older SSDs may have their performance decline over time or after they are fairly full.

Answered By: Takkat

HDD technology uses a magnetic process to store and manipulate data. This process is non-destructive, meaning you can rewrite data on a disk drive practically indefinitely, at least until the mechanics start to fail. SSD technology, in contrast, does not run the risk of mechanical failure, but how it stores its data is a concern: SSDs store data using controlled bursts of electrical energy, and the semiconductor cells hit by this current slowly wear out as they are used over time.

This process has been improved upon through software and hardware updates. Early adopters found that operating systems were not programmed to store data the way an SSD does, which put SSDs through unnecessarily large numbers of read/write cycles. Most older BIOSes also did not properly recognize SSDs, which caused issues as well.

The introduction of UEFI, together with OS updates, corrected most of the issues that early SSD owners had. And, as with any production process, SSDs themselves have gotten better at managing and mitigating the degradation of their NAND flash.

However, it remains true that your SSD has a limited number of write cycles before it can no longer store data, although that concern is about as marginal as your HDD failing.

There’s a very in-depth podcast about the subject here if you’d like to explore the topic further.

Answered By: Arkanoid

Even if you have enough RAM, you might still want to prevent file copies or searches from swapping applications out of RAM. This can matter on file servers (NAS, Samba, FTP) that handle large file operations.

In order to do that, it’s best to set the following in /etc/sysctl.conf (the exact values are examples; they are commonly suggested starting points):

vm.swappiness = 10
vm.vfs_cache_pressure = 50

The first setting prevents disk cache (e.g. doing cp) from swapping out existing apps from RAM. The normal default setting on that is 60. Note that using 0, although more aggressive, has been sometimes reported to generate out-of-memory errors.

The second setting prevents file searches (e.g. doing find) from swapping out existing apps from RAM. The normal default setting for that is 100.
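The currently active values can be checked (and changed at runtime, without editing /etc/sysctl.conf) through procfs and the sysctl tool:

```shell
# Show the currently active values:
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure
# Change them at runtime (root required), e.g.:
#   sudo sysctl vm.swappiness=10 vm.vfs_cache_pressure=50
```

Runtime changes made with sysctl are lost on reboot; only the /etc/sysctl.conf entries persist.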

Although the author of the referenced article does not refer explicitly to SSDs, this approach also reduces SSD wear by reducing swapping, and he provides an example of how to test it.

Reference: https://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that

Answered By: Dorian B.

Life vs. Performance Balance

You bought an SSD for its performance advantages, not simply to increase battery life, right? So use your SSD for that very purpose: to make your system quicker.

If you can afford to add more RAM to reduce *swap I/O, this will clearly increase the lifespan of your SSD, since another obvious performance drain is I/O cycles to swap space on a filesystem.

As with so many aspects of system configuration, there is rarely a single rule that fits all. User needs differ, so system requirements, and thus configuration, must differ to meet those needs. Put simply, it boils down to how you configure your system.

If you have room for an SSD in addition to your non-SSD drive, then write files that rarely change to the non-SSD drive and keep often-accessed files on your SSD drive.
This will ensure that…

[1] – The *trim feature will have the resources it needs to wear all of the drive evenly. [Benefit = Life]

[2] – Your I/O latency will be reduced, with the high-speed SSD serving an often-accessed filesystem. [Benefit = Performance]

Configure your temp filesystem to use space as required for your particular needs. If you have enough RAM, consider setting your swappiness level to be less aggressive; this will ensure that…

[1] – SSD I/O is reduced yet your system will still meet the demands of its user(s). [Benefit = Life]

Do you really need all of those logs? Consider what your system is logging, and where.
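On systemd-based systems, one hedged way to rein in log writes is capping the journal in /etc/systemd/journald.conf (these are standard journald options; the size limit is an example value):

```ini
[Journal]
# Cap the persistent journal's size on disk (the default is a
# percentage of the filesystem it lives on).
SystemMaxUse=100M
# Or keep logs in RAM only, at the cost of losing them on reboot:
#Storage=volatile
```

After editing, restart the journal daemon (systemctl restart systemd-journald) for the limits to take effect.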

[1] – SSD I/O is reduced as log file access is reduced. [Benefit = Life & Performance]

There are a heap of other aspects of your system configuration that can make even a non-SSD system perform faster. Default system builds have a tough metric to fulfil: pure performance, keeping data safe and secure, or a balanced mixture of both.
If you apply the same mentality to what you write, and to which device, you can drastically increase performance and at the same time extend the lifespan of your SSD.

*swap – Remember this isn’t just used when resources are low; the swappiness setting, configured out of the box on many Linux distros, will park long-running, low-priority processes further down the performance ladder into swap space.

*trim – worth verifying you have it enabled; a good article on what TRIM is and how it works: http://searchstorage.techtarget.com/definition/TRIM
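Verifying TRIM in practice usually means checking for either the discard mount option (continuous TRIM) or a periodic fstrim.timer. A small sketch of the mount-option check (the option string here is hard-coded as an example; on a live system it would come from findmnt -no OPTIONS /):

```shell
opts="rw,relatime,discard,errors=remount-ro"   # example; real: opts=$(findmnt -no OPTIONS /)
case ",$opts," in
  *,discard,*) echo "continuous TRIM enabled" ;;
  *)           echo "no discard option; rely on periodic fstrim (fstrim.timer)" ;;
esac
# → continuous TRIM enabled
```

On most modern distributions the periodic fstrim.timer is enabled by default and is generally preferred over the discard mount option.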

Answered By: Rob Lawton

The accepted answer explains the theory; I figured I might add a little real-life data from two of my systems.

Desktop system

  • Has a 400GB Intel 750 SSD.
  • Has 32 GiB of RAM; swap isn’t needed often.
  • However, it hibernates regularly (say, once a day), requiring a large swap write.
  • Has been in use for just over 4 years.
  • Runs Debian on ext4, and a swap partition.
  • Used to contain a Windows 10 installation for a couple of years, but no longer.
  • Has no configuration to spare the SSD (no swappiness tweaking, etc).

According to SMART, it has seen 28TiB lifetime writes (19GiB/day). The ext4 filesystem has seen 18TiB lifetime writes (12GiB/day). The remainder is due to swap and the Windows installation.

According to Intel’s SSD Toolbox, the drive is in excellent health and has about 95% of its lifetime remaining:

Intel SSD Toolbox summary


Laptop system

  • An Acer Aspire ES1-132, in use for 3 years.
  • Has a roughly 60 GB eMMC drive.
  • Has 4GiB memory, so probably more swap pressure; although system usage tends to be fairly light.
  • Rarely hibernates.
  • Runs Debian 10 on ext4, with a separate swap partition.
  • Has no configuration to spare the SSD (no swappiness tweaking, etc).

I can’t seem to obtain the total device writes, but the ext4 filesystem has seen almost exactly 1TiB writes (1GiB/day). According to mmc-utils:

# mmc extcsd read /dev/mmcblk0 | egrep -i 'life|eol'
eMMC Life Time Estimation A [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_A]: 0x01
eMMC Life Time Estimation B [EXT_CSD_DEVICE_LIFE_TIME_EST_TYP_B]: 0x01
eMMC Pre EOL information [EXT_CSD_PRE_EOL_INFO]: 0x01

Which means 0-10% of the SSD’s spare blocks have been used, and the drive has "Normal pre-EOL status". The way I interpret that, the drive has >90% of its life remaining.
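The life-time estimation values are coarse 10% buckets defined by the eMMC standard: 0x01 means 0–10% of rated lifetime used, 0x02 means 10–20%, and so on. A tiny decoder:

```shell
# Decode an EXT_CSD_DEVICE_LIFE_TIME_EST value into a lifetime-used bucket.
val=0x01                      # as reported by mmc-utils above
used=$(( val * 10 ))
echo "between $(( used - 10 )) and ${used}% of rated lifetime used"
# → between 0 and 10% of rated lifetime used
```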


Two very different systems, both used for years with swap on an SSD, and both systems are totally fine. Based on the diagnostics, both SSDs have more than 90% of their life left.

To be fair, both systems are probably light on swap usage. Systems with a lot more memory pressure will see more swap writes, and therefore also more SSD wear. But for normal desktop use with some occasional light swap usage, I don’t see a problem with putting swap on an SSD.

Answered By: marcelm

Jan 2021. I use a small, dedicated enterprise-grade SSD as a swap drive. These enterprise drives can be bought for as little as $80 for 240 GB right now, and are 3D NAND with wear leveling and other improvements valuable for swap.

By using the drive only for swap, you pretty much guarantee that its failure won’t affect your expensive terabyte-level data drive, and you still get the performance of an SSD. Estimates for very heavy use are about 2 1/2 years of life.

Answered By: RGRHON