Why is number of open files limited in Linux?

Right now, I know how to:

  • find open files limit per process: ulimit -n
  • count all opened files by all processes: lsof | wc -l
  • get maximum allowed number of open files: cat /proc/sys/fs/file-max

My question is: Why is there a limit of open files in Linux?

Asked By: xanpeng


The reason is that the operating system needs memory to manage each open file, and memory is a limited resource – especially on embedded systems.
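Each of those open files is tracked by a structure in kernel memory. As a rough illustration (a sketch that assumes a typical kernel where the slab cache for these structures is named filp, and that /proc/slabinfo is readable as root), you can see how many such structures are currently allocated and how many bytes each one costs:

# grep filp /proc/slabinfo

The first two numeric columns are the active and total allocated objects, and the objsize column is the size in bytes of each per-open-file structure.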

As the root user you can change the maximum number of open files per process (via ulimit -n) and system-wide (e.g. echo 800000 > /proc/sys/fs/file-max).
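For example, a minimal sketch of both knobs (the value 800000 comes from the text above, 4096 is an arbitrary illustration, and neither change survives a reboot unless made persistent, typically via /etc/security/limits.conf and /etc/sysctl.conf):

# ulimit -n                              # show the per-process limit in this shell
# ulimit -n 4096                         # raise it for this shell and its children
# echo 800000 > /proc/sys/fs/file-max    # raise the system-wide ceiling
# cat /proc/sys/fs/file-max              # verify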

Answered By: jofel

I think it’s largely for historical reasons.

A Unix file descriptor is a small int value, returned by functions like open and creat, and passed to read, write, close, and so forth.

At least in early versions of Unix, a file descriptor was simply an index into a fixed-size per-process array of structures, where each structure contains information about an open file. If I recall correctly, some early systems limited the size of this table to 20 or so.

More modern systems have higher limits, but have kept the same general scheme, largely out of inertia.
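That scheme is still visible today: descriptors are handed out starting from the lowest free slot, and the size of the per-process table is what ulimit -n reports. A quick illustration (output is typical and will vary); the first three entries are stdin, stdout and stderr, and the fourth is the descriptor ls itself opened to read the directory:

$ ls /proc/self/fd
0  1  2  3
$ ulimit -n
1024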

Answered By: Keith Thompson

Please note that lsof | wc -l counts a lot of duplicated entries (forked processes can share file handles, etc.). That number can therefore be much higher than the limit set in /proc/sys/fs/file-max.

To get the current number of open files from the Linux kernel’s point of view, do this:

cat /proc/sys/fs/file-nr

Example: this server has 40096 file handles allocated out of a maximum of 65536, although lsof reports a much larger number (the three fields of file-nr are the number of allocated handles, the number of allocated-but-unused handles, and the maximum):

# cat /proc/sys/fs/file-max
65536
# cat /proc/sys/fs/file-nr 
40096   0       65536
# lsof | wc -l
521504
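As a rough cross-check, you can also count descriptor entries directly under /proc. This is only a sketch (it assumes root access and a standard procfs, and the result still will not match file-nr exactly, because descriptors duplicated by fork or dup point at the same underlying open file):

# find /proc/[0-9]*/fd -maxdepth 1 -type l 2>/dev/null | wc -l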
Answered By: grebneke