How to trigger action on low-memory condition in Linux?
So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won’t fail to fork() or suffer from any of the other usual OOM issues).
I found the OOM killer, which I understand is useful, but which doesn’t really do what I’d need to do.
Ideally, if I’m running out of memory, I want to know why.
I suppose I could write my own program that runs on startup and uses a fixed amount of memory, then only does stuff once it gets informed of low memory by the kernel, but that brings up its own question…
Is there even a syscall to be informed of something like that?
A way of saying to the kernel “hey, wake me up when we’ve only got 128 MB of memory left”?
I searched around the web and on here but didn't find anything fitting that description. Most people seem to poll on a timer, but the obvious problem with that is that it makes it much less likely you'll be able to tell which process(es) caused the problem.
What you are asking for is, basically, a kernel-based callback on a low-memory condition, right? If so, I strongly believe that the kernel does not provide such a mechanism, and for a good reason: when it is low on memory, it should immediately run the only thing that can free some memory, the OOM killer. Running any other program could bring the machine to a halt.
Anyway, you can run a simple monitoring solution in userspace. I had the same low-memory debug/action requirement in the past, and I wrote a simple bash script which did the following:
- monitor a soft watermark: if memory usage is above this threshold, collect some statistics (processes, free/used memory, etc.) and send a warning email;
- monitor a hard watermark: if memory usage is above this threshold, collect some statistics, kill the most memory-hungry (or least important) processes, then send an alert email.
Such a script is very lightweight, and it can poll the machine at a short interval (e.g. every 15 seconds).
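A minimal sketch of that polling approach, written here in Python rather than the original bash (which isn't shown); the threshold values and the actions taken on a hit are placeholders:

```python
"""Poll /proc/meminfo and act on soft/hard low-memory watermarks.

The thresholds and the "collect stats / send mail / kill" actions are
hypothetical; this only sketches the polling idea described above.
"""
import time

SOFT_MB = 256   # below this: collect stats, send a warning
HARD_MB = 128   # below this: collect stats, kill/alert

def available_mb(meminfo_text):
    """Parse MemAvailable (reported in kB) out of /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1]) // 1024
    raise ValueError("no MemAvailable line found")

def watermark(avail_mb, soft_mb=SOFT_MB, hard_mb=HARD_MB):
    """Classify available memory against the two watermarks."""
    if avail_mb < hard_mb:
        return "hard"
    if avail_mb < soft_mb:
        return "soft"
    return "ok"

def monitor(interval=15):
    """Poll forever; replace the print with stats collection / email / kills."""
    while True:
        with open("/proc/meminfo") as f:
            state = watermark(available_mb(f.read()))
        if state != "ok":
            print("low memory condition:", state)
        time.sleep(interval)

# monitor() would run the polling loop
```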
Yes, the Linux kernel does provide a mechanism for this: memory pressure notification. This is documented in https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt, section Memory Pressure.
In short, you register an eventfd file descriptor in /sys/fs/cgroup/memory/memory.pressure_level on which you want to receive notifications. These notifications can be low, medium, or critical. A typical use case would be to free some or all internal caches in your process when you receive a notification, in order to prevent an impending OOM kill.
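A sketch of such a listener, assuming Python 3.10+ (for os.eventfd), a cgroup-v1 memory controller mounted at the path shown, and sufficient privileges. Per that kernel documentation, registration is done by writing "&lt;event_fd&gt; &lt;fd of memory.pressure_level&gt; &lt;level&gt;" to the cgroup's cgroup.event_control file:

```python
import os

# Assumed cgroup-v1 memory controller root; adjust to your own cgroup.
CGROUP = "/sys/fs/cgroup/memory"

def wait_for_pressure(level="low"):
    """Block until the kernel signals the given memory-pressure level.

    Follows Documentation/cgroup-v1/memory.txt ("Memory Pressure"):
    write "<event_fd> <fd of memory.pressure_level> <level>" to
    cgroup.event_control, then read the eventfd to wait.
    Requires Python >= 3.10 and write access to the cgroup.
    """
    if level not in ("low", "medium", "critical"):
        raise ValueError("level must be low, medium or critical")
    efd = os.eventfd(0)
    pfd = os.open(os.path.join(CGROUP, "memory.pressure_level"), os.O_RDONLY)
    cfd = os.open(os.path.join(CGROUP, "cgroup.event_control"), os.O_WRONLY)
    try:
        os.write(cfd, f"{efd} {pfd} {level}".encode())
        os.eventfd_read(efd)   # blocks until a pressure notification arrives
        # here: free internal caches, dump a process list, etc.
    finally:
        for fd in (cfd, pfd, efd):
            os.close(fd)
```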
The current best answer is for cgroups-v1. For cgroups-v2, one can listen for file modified events on the memory.events
file (documentation of the file content).
The behaviour of this file can actually be tested with a few shell commands:
# Spawn a new slice with memory limits to avoid OOMing the entire system
systemd-run --pty --user -p MemoryMax=1050M -p MemoryHigh=1000M bash
# Watch memory.events for changes and read when changed
inotifywait -e modify -m /sys/fs/cgroup$(cut -d: -f3 /proc/self/cgroup)/memory.events \
  | while read l; do echo "$l"; cat "${l// *}"; done &  # ${l// *} keeps only the path
# Consume memory
tail /dev/zero
Sadly, this seems to work only if a memory limit is actually set for the cgroup. As an alternative, one can listen to memory.pressure (the PSI interface), but that's not cgroup-based (at least for non-root users), and not quite as quick to react.
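For completeness, a hedged sketch of the PSI route: a trigger is written to /proc/pressure/memory (registering triggers may require root or CAP_SYS_RESOURCE on recent kernels, and CONFIG_PSI=y), and poll() then reports POLLPRI once the stall threshold is crossed. The threshold values here are arbitrary examples:

```python
import select

def wait_for_memory_stall(psi_path="/proc/pressure/memory",
                          trigger=b"some 150000 1000000"):
    """Block until memory stall time exceeds the trigger threshold.

    This example trigger means: notify when tasks were stalled on memory
    for more than 150 ms within any 1 s window. Writing triggers may
    require elevated privileges, and the kernel must have PSI enabled.
    """
    with open(psi_path, "r+b", buffering=0) as f:
        f.write(trigger)                      # register the trigger
        poller = select.poll()
        poller.register(f, select.POLLPRI)
        return poller.poll()                  # blocks until the trigger fires

def parse_psi(text):
    """Parse /proc/pressure/memory content into {"some"/"full": {key: float}}."""
    out = {}
    for line in text.splitlines():
        kind, *fields = line.split()
        out[kind] = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return out
```

Without a registered trigger, the same file can still be read and parsed periodically, which degrades gracefully to the polling approach from the other answer.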