How to fill 90% of the free memory?

I want to do some low-resources testing and for that I need to have 90% of the free memory full.

How can I do this on a *nix system?

From this HN comment: https://news.ycombinator.com/item?id=6695581

Just fill /dev/shm via dd or similar.

swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
Answered By: damio

How about ramfs if it exists? Mount it and copy over a large file?
If there’s no /dev/shm and no ramfs – I guess a tiny C program that does a large malloc based on some input value? Might have to run it a few times at once on a 32 bit system with a lot of memory.

Answered By: nemo

How about a simple Python solution?

#!/usr/bin/env python3

import sys
import time

if len(sys.argv) != 2:
    print("usage: fillmem <number-of-megabytes>")
    sys.exit(1)

count = int(sys.argv[1])

# a tuple of N pointer-sized slots takes roughly N * 8 bytes, so this is ~1 MiB
megabyte = (0,) * (1024 * 1024 // 8)

data = megabyte * count

while True:
    time.sleep(1)
Answered By: swiftcoder

You can write a C program to malloc() the required memory and then use mlock() to prevent the memory from being swapped out.

Then just let the program wait for keyboard input, and unlock the memory, free the memory and exit.

Answered By: Chris
  1. Run Linux;
  2. boot with the mem=nn[KMG] kernel boot parameter

(look in linux/Documentation/kernel-parameters.txt for details).

Answered By: Anon

If you want to test a particular process with limited memory you might be better off using ulimit to restrict the amount of allocatable memory.

Answered By: sj26

I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner

The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.

Answered By: Robert Metzger

I keep a function to do something similar in my dotfiles. https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248

function malloc() {
  if [[ $# -eq 0 || $1 == '-h' || $1 -lt 0 ]] ; then
    echo -e "usage: malloc N\n\nAllocate N MB, wait, then release it."
  else 
    N=$(free -m | grep Mem: | awk '{print int($2/10)}')
    if [[ $N -gt $1 ]] ;then 
      N=$1
    fi
    sh -c "MEMBLOB=$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
  fi
}
Answered By: valadil

I would suggest that running a VM with limited memory and testing the software in that would be a more efficient test than trying to fill memory on the host machine.

That method also has the advantage that if the low-memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in, not the machine you might have other useful processes running on.

Also if your testing is not CPU or IO intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.

Answered By: David Spillett

stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:

stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping:

stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable. See also the reference wiki for stress-ng for further usage examples.

Answered By: tkrennwa

I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don’t need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.

Answered By: Craig Barnes

If you have basic GNU tools (head and tail) or BusyBox on Linux, you can do this to fill a certain amount of free memory:

head -c BYTES /dev/zero | tail
head -c 5000m /dev/zero | tail #~5GB, portable
head -c 5G    /dev/zero | tail #5GiB on GNU (not busybox)

This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be infinitely long, but is limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM head and tail themselves use on your system and subtract that.

To just quickly run out of RAM completely, you can remove the limiting head part:

tail /dev/zero

If you want to also add a duration, this can be done quite easily in bash (will not work in sh):

cat <(head -c 500m /dev/zero) <(sleep SECONDS) | tail

<(command) tells the interpreter to run command and make its output appear as a file, hence echo <(true) will output a file descriptor path, e.g. /dev/fd/63, so to cat it looks like it was passed two files. More info on it here: http://tldp.org/LDP/abs/html/process-sub.html

The cat command will wait for inputs to complete until exiting, and by keeping one of the pipes open, it will keep tail alive.

If you have pv and want to slowly increase RAM use:

head -c TOTAL /dev/zero | pv -L BYTES_PER_SEC | tail
head -c 1000m /dev/zero | pv -L 10m | tail

The latter will use up to one gigabyte at a rate of ten megabytes per second. As an added bonus, pv will show the current rate of use and the total use so far. Of course this can also be done with previous variants:

head -c 500m /dev/zero | pv | tail

Just inserting the | pv | part will show you the current status (throughput and total by default).

Compatibility hints and alternatives
If you do not have a /dev/zero device, the standard yes and tr tools might substitute: yes | tr '\n' x | head -c BYTES | tail (yes outputs an endless stream of y lines, tr substitutes the newlines such that everything becomes one huge line and tail needs to keep all that in memory).
Another, simpler alternative is using dd: dd if=/dev/zero bs=1G of=/dev/null uses 1GB of memory on GNU and BusyBox, but also 100% CPU on one core.
Finally, if your head does not accept a suffix, you can calculate an amount of bytes inline, for example 50 megabytes: head -c $((1024*1024*50))


Credits to falstaff for contributing a variant that is even simpler and more broadly compatible (like with BusyBox).


Why another answer?
  • The accepted answer recommends installing a package (I bet there’s a release for every chipset without needing a package manager);
  • the top voted answer recommends compiling a C program (I did not have a compiler or toolchain installed to compile for your target platform);
  • the second top voted answer recommends running the application in a VM (yeah let me just dd this phone’s internal sdcard over usb or something and create a virtualbox image);
  • the third suggests modifying something in the boot sequence, which does not fill the RAM as desired;
  • the fourth only works in so far as the /dev/shm mountpoint (1) exists and (2) is large (remounting needs root);
  • the fifth combines many of the above without sample code;
  • the sixth is a great answer, but I did not see it before coming up with my own approach, so I thought I’d add my own, also because it’s shorter to remember or type over if you don’t see that the memblob line is actually the crux of the matter;
  • the seventh again does not answer the question (it uses ulimit to limit a process instead);
  • the eighth tries to get you to install python;
  • the ninth thinks we’re all very uncreative; and finally
  • the tenth wrote his own C++ program, which causes the same issue as the top voted answer.

Answered By: Luc

This program works very well for allocating a fixed amount of memory:

https://github.com/julman99/eatmemory

Answered By: Aleksandr Dubinsky

I need to have 90% of the free memory full

In case there are not enough answers already, one I did not see is creating a ramdisk, or technically a tmpfs. This maps RAM to a folder in Linux; you then create or dump however many files of whatever size in there to take up however much RAM you want. The one downside is that you need to be root to use the mount command.

# first as root make the given folder, however you like where the tmpfs mount is going to be.

mkdir /ramdisk

chmod 777 /ramdisk

mount -t tmpfs -o size=500G tmpfs /ramdisk

# change 500G to whatever size makes sense; in my case my server has 512GB of RAM installed.

Obtain or copy or create a file of reasonable size; create a 1GB file for example then

cp my1gbfile /ramdisk/file001
cp my1gbfile /ramdisk/file002

# do 450 times; 450 GB of 512GB approx 90%

use free -g to observe how much RAM is allocated.

Note: with, say, 512 GB of physical RAM, mounting a tmpfs larger than 512 GB will still work, and will let you freeze/crash the system by allocating 100% of the RAM. For that reason it is advisable to size the tmpfs so that a reasonable amount of memory stays free for the system.

To create a single file of a given size:

truncate -s 450G my450gbfile

# man truncate

# also dd works well

dd if=/dev/zero of=my450gbfile bs=1G count=450
Answered By: ron

With just dd: this continuously reads and keeps about 10 GB resident (RES):

dd if=/dev/zero of=/dev/null iflag=fullblock bs=10G 

To allocate only once, add count=1. The downside is that it is CPU-heavy.

Answered By: sivann

This expands @tkrennwa’s answer:

You may not wish to spin 100% cpu during the test, which stress-ng does by default.

This invocation will not spin the CPU, but it will allocate 4g of RAM, page-lock it (so it can’t swap), and then wait forever (i.e., until Ctrl-C):

stress-ng --vm-bytes 4g --vm-keep -m 1 --vm-ops 1 --vm-hang 0 --vm-locked
  • --vm-ops N – stop vm workers after N bogo operations.
  • --vm-hang N – sleep N seconds before unmapping memory, the default is zero seconds. Specifying 0 will do an infinite wait.
  • --vm-locked – Lock the pages of the mapped region into memory using mmap MAP_LOCKED (since Linux 2.5.37).

Also, since you are just eating memory, you might want to --vm-madvise hugepage to use "huge pages" (typically 2MB instead of 4k). This is notably faster when freeing pages after CTRL-C because far fewer pages occupy the pagetable:

]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1  --vm-locked --vm-madvise hugepage
stress-ng: info:  [3107579] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info:  [3107579] dispatching hogs: 1 vm
stress-ng: info:  [3107579] successful run completed in 17.15s

real    0m17.186s   <<<<<< with huge pages
user    0m2.481s
sys 0m14.453s
]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1  --vm-locked 
stress-ng: info:  [3108342] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info:  [3108342] dispatching hogs: 1 vm
stress-ng: info:  [3108342] successful run completed in 36.52s

real    0m36.555s   <<<<<< without huge pages
user    0m2.598s
sys 0m33.538s
Answered By: KJ7LNW