Limiting processes to no more than 10% of CPU usage

I operate a Linux system with many users, but sometimes abuse occurs: a user might run a single process that uses up more than 80% of the CPU/memory.

So is there a way to prevent this by limiting how much CPU a process can use (to 10%, for example)? I’m aware of cpulimit, but unfortunately it only applies the limit to the processes I explicitly tell it to limit (i.e. single processes). So my question is: how can I apply the limit to all running processes, and to processes that will be run in the future, without having to provide their IDs/paths?

Asked By: Giovanni Mounir


Since you state that cpulimit would not be practical in your case, I suggest you look at nice, renice, and taskset, which may come close to what you want to achieve. Note, though, that taskset sets a process’s CPU affinity, so it might not be immediately helpful in your case.
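
For illustration (PID 1234 is a placeholder), lowering the priority of a running process and pinning it to a subset of CPUs might look like this:

renice +15 -p 1234      # raise the nice value, i.e. lower the scheduling priority
taskset -cp 0,1 1234    # restrict the process to CPUs 0 and 1 (affinity, not a percentage cap)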

Answered By: R J

For memory, what you are looking for is ulimit -v. Note that ulimit is inherited by child processes, so if you apply it to the user’s login shell at login time, it applies to all of their processes.

If your users all use bash as login shell, putting the following line in /etc/profile should cause all user processes to have a hard limit of 1 gigabyte (more exactly, one million kilobytes):

ulimit -vH 1000000

The H option makes sure it’s a hard limit, that is, the user cannot raise it again afterwards. Of course the user can still fill memory by starting sufficiently many processes at once.
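
If that is a concern, a hard limit on the process count can be set the same way (the value here is only an example):

ulimit -uH 100   # hard limit: at most 100 processes for this user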

For other shells, you’ll have to find out what initialization files they read instead (and what other command instead of ulimit they use).

For CPU, what you wish for doesn’t seem to make sense to me. What would be the use of leaving 90% of the CPU unused when only one process is running? I think what you really want is nice (and possibly ionice). Note that, like ulimit limits, nice values are inherited by child processes, so applying it to the login shell at login time suffices. I guess that also applies to ionice, but I’m not sure.
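
A minimal sketch of that approach, assuming bash login shells and /etc/profile as in the ulimit example above:

renice -n 10 $$ 2>/dev/null    # lower the login shell's CPU priority; children inherit it
ionice -c 3 -p $$ 2>/dev/null  # optionally also set idle I/O priority (hedged: behavior varies by kernel)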

Answered By: celtschk

Did you look at cgroups? There is some information about them on the Arch Wiki. Read the section about cpu.shares; it looks like it does what you need, and cgroups can operate at the user level, so you can limit all of a user’s processes at once.
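
As a rough sketch (group name and value are illustrative), limiting one user’s existing processes with the cgroup-tools commands could look like:

sudo cgcreate -g cpu:/users/alice
sudo cgset -r cpu.shares=100 users/alice              # relative weight; the default is 1024
sudo cgclassify -g cpu:users/alice $(pgrep -u alice)  # move alice's current processes into the group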

Answered By: Paul Schyska

While this can be abuse where memory is concerned, it isn’t for CPU: when a CPU is idle, a running process (by "running", I mean that the process isn’t waiting for I/O or anything else) will take 100% of the CPU time by default, and there’s no reason to enforce a limit in that case.

Now, you can set up priorities thanks to nice. If you want them to apply to all processes for a given user, you just need to make sure that the user’s login shell is run with nice: the child processes will inherit the nice value. This depends on how the users log in. See Prioritise ssh logins (nice) for instance.

Alternatively, you can set up virtual machines. Indeed setting a per-process limit doesn’t make much sense since the user can start many processes, abusing the system. With a virtual machine, all the limits will be global to the virtual machine.

Another solution is to set limits in /etc/security/limits.conf; see the limits.conf(5) man page. For instance, you can set the maximum CPU time per login and/or the maximum number of processes per login, as sketched below. You can also set maxlogins to 1 for each user.
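
For example, entries in /etc/security/limits.conf might look like this (user name and values are illustrative; see limits.conf(5) for units):

# <domain>  <type>  <item>      <value>
alice       hard    cpu         10     # max CPU time, in minutes
alice       hard    nproc       50     # max number of processes
alice       -       maxlogins   1      # max number of concurrent logins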

Answered By: vinc17

nice / renice

nice is a great tool for one-off tweaks to a system.

 nice COMMAND

cpulimit

cpulimit is useful if you need to run a CPU-intensive job and having free CPU time is essential for the responsiveness of the system.

cpulimit -l 50 -- COMMAND

cgroups

cgroups apply limits to a set of processes, rather than to just one:

cgcreate -g cpu:/cpulimited            # create a cgroup named "cpulimited"
cgset -r cpu.shares=512 cpulimited     # relative CPU weight (the default is 1024)
cgexec -g cpu:cpulimited COMMAND_1     # run each command inside the group
cgexec -g cpu:cpulimited COMMAND_2
cgexec -g cpu:cpulimited COMMAND_3

Resources

http://blog.scoutapp.com/articles/2014/11/04/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups
http://manpages.ubuntu.com/manpages/xenial/man1/cpulimit.1.html

Answered By: RafaSashi

If you want to limit processes that are already started, you will have to do it one by one by PID, but you can use a batch script to do that, like the one below:

#!/bin/bash
LIMIT_PIDS=$(pgrep tesseract)   # replace tesseract with the name of your process
echo $LIMIT_PIDS
for i in $LIMIT_PIDS
do
    cpulimit -p $i -l 10 -z &   # limit each process to 10%; -z exits when the process dies
done

In my case pypdfocr launches the greedy tesseract.

Also, in some cases where your CPU is fast enough, you can just use renice, like this:

watch -n5 'pidof tesseract | xargs -L1 sudo renice +19'
Answered By: Eduard Florinescu

Since your tags have centos, you can use systemd.

For example if you want to limit user with ID of 1234:


sudo systemctl edit --force user-1234.slice

Then type and save this:


[Slice]
CPUQuota=10%

The next time that user logs in, the limit will take effect.

Man pages: systemctl, systemd.slice, systemd.resource-control
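
To apply the limit to an already-running session without waiting for re-login, something like this should also work (a sketch; see the systemctl man page for set-property):

sudo systemctl set-property user-1234.slice CPUQuota=10%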

Disclaimer: this answer is for the benefit of those who find this Q&A and want to control processes they run themselves, limiting CPU usage regardless of the current total load on the system.

I found that on my Linux Mint system, cpulimit did not help. Two ways worked, though:

  1. cputool: e.g. cputool -c 10 -- stress -c 4

(stress is (IMO) a small, useful tool for stress-testing the system)

Downside: the limit cannot easily be changed once started.

  2. cgroups

Code:

sudo cgcreate -g cpu:mygroup1
cat /sys/fs/cgroup/mygroup1/cpu.max   # optional: shows the current quota and period
max 100000
sudo cgset -r cpu.max="200000 100000" mygroup1
sudo cgexec -g cpu:mygroup1 sudo -u username1 -g groupname1 stress -c 4
stress: info: [125425] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

A user cannot start a process in the cgroup unless access rights are granted. I have not learned how to add such rights (I hope it is possible), so I use sudo instead.

# in another terminal, to change the usage (this syntax changes the first value only):
sudo cgset -r cpu.max=100000 mygroup1

Notes for cgroups:

cpu.max has two values: the first is the allowed time quota in microseconds for which all processes in the group can collectively run during one period; the second is the length of the period. On multicore/multiprocessor systems the first value is the quota across all cores while the second applies per core, so setting the first to twice the second is expected to yield a CPU usage of 2 divided by the total number of cores. For example, on an 8-core machine, cpu.max="200000 100000" allows the equivalent of 2 fully-used cores, i.e. 25% of total capacity.

On my system min valid value is 1000, max is 1000000.

Using a second value of 100000 (the default) resulted in an additional 2x-3x speed penalty when I ran ffmpeg; using 1000000 resulted in no noticeable penalty.

A surprise for me: why, on GHz processors, do interruptions every hundred milliseconds matter so much, while one every second does not?

cgroups can be used without cgcreate, cgset, and cgexec (they are in the cgroup-tools package, which on Linux Mint required additional installation). IMO a good description of how to do that:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-cgroups-v2-to-control-distribution-of-cpu-time-for-applications_managing-monitoring-and-updating-the-kernel, and how to start a process in a cgroup: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/starting_a_process.
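
A minimal sketch of the same setup using only the cgroup v2 filesystem (no cgroup-tools; the group name is illustrative, and the cpu controller may first need enabling in the parent's cgroup.subtree_control):

sudo mkdir /sys/fs/cgroup/mygroup2
echo "200000 1000000" | sudo tee /sys/fs/cgroup/mygroup2/cpu.max   # 200 ms quota per 1 s period
echo $$ | sudo tee /sys/fs/cgroup/mygroup2/cgroup.procs            # move the current shell in; children inherit
stress -c 4                                                        # now runs under the quota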

Answered By: Martian2020