Why is number of open files limited in Linux?
Right now, I know how to:
- find the open files limit per process:
ulimit -n
- count all open files across all processes:
lsof | wc -l
- get the maximum allowed number of open files:
cat /proc/sys/fs/file-max
My question is: Why is there a limit of open files in Linux?
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource – especially on embedded systems.
As the root user you can raise the maximum number of open files per process (via
ulimit -n) and system-wide (e.g.
echo 800000 > /proc/sys/fs/file-max).
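A quick read-only way to check both limits (a sketch; raising them requires root, so the write is shown commented out):

```shell
# Per-process limits for the current shell: soft, then hard.
ulimit -Sn
ulimit -Hn

# System-wide ceiling, as exposed by the kernel under /proc.
cat /proc/sys/fs/file-max

# As root you could raise the system-wide ceiling, e.g.:
# echo 800000 > /proc/sys/fs/file-max
```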
I think it’s largely for historical reasons.
A Unix file descriptor is a small
int value, returned by functions like
open and
creat, and passed to functions like
read,
write, and
close.
At least in early versions of Unix, a file descriptor was simply an index into a fixed-size per-process array of structures, where each structure contains information about an open file. If I recall correctly, some early systems limited the size of this table to 20 or so.
More modern systems have higher limits, but have kept the same general scheme, largely out of inertia.
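You can see the index-into-a-table behaviour from the shell: descriptors 0, 1 and 2 are stdin, stdout and stderr, so the first file a process opens typically lands in slot 3. A sketch using the shell's exec redirection and the /proc fd listing:

```shell
# Open /dev/null on descriptor 3 -- the first free slot,
# since 0-2 are already taken by stdin/stdout/stderr.
exec 3< /dev/null

# Each entry under /proc/$$/fd is an index into this
# shell process's table of open files.
ls /proc/$$/fd

# Close slot 3 again.
exec 3<&-
```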
Please note that
lsof | wc -l counts many duplicated entries (forked processes can share file handles, etc.). That number can be much higher than the limit set in
/proc/sys/fs/file-max.
To get the current number of open files from the Linux kernel's point of view, do this:
cat /proc/sys/fs/file-nr
Example: this server has 40096 out of a maximum of 65536 open file handles, although lsof reports a much larger number:
# cat /proc/sys/fs/file-max
65536
# cat /proc/sys/fs/file-nr
40096 0 65536
# lsof | wc -l
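The three fields of file-nr can also be read directly in a script. A minimal sketch, assuming the documented field order (allocated handles, allocated-but-unused handles, system-wide maximum):

```shell
# /proc/sys/fs/file-nr holds three numbers:
#   allocated handles, allocated-but-unused handles, system-wide max
read allocated unused max < /proc/sys/fs/file-nr

# Handles actually in use = allocated minus the unused ones.
echo "$((allocated - unused)) of $max file handles in use"
```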