Why is /dev/null a file? Why isn't its function implemented as a simple program?

I am trying to understand the concept of special files on Linux. However, having a special file in /dev seems plain silly when its function could be implemented by a handful of lines in C, as far as I can tell.

Moreover, you could use it in pretty much the same manner, i.e. piping into null instead of redirecting into /dev/null. Is there a specific reason for having it as a file? Doesn’t making it a file cause other problems, like too many programs accessing the same file?

Asked By: Ankur S


In fairness, it’s not a regular file per se; it’s a character special device:

$ file /dev/null
/dev/null: character special (3/2)

Because it functions as a device rather than as a file or program, redirecting input to or output from it is a simpler operation, and it can be attached to any file descriptor, including standard input/output/error.
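To illustrate what “attached to any file descriptor” buys you, here is a minimal sketch (my own illustration, using only standard POSIX calls) of a process silencing its own stderr by pointing descriptor 2 at the device:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Open the device like any other file... */
    int fd = open("/dev/null", O_WRONLY);
    if (fd == -1)
        return 1;

    /* ...then splice it over standard error (descriptor 2). */
    dup2(fd, STDERR_FILENO);
    close(fd);

    fprintf(stderr, "this line is discarded by the kernel\n");
    printf("stdout still works\n");
    return 0;
}

No helper process is involved; after the dup2(), writes to stderr go straight to the kernel’s null device.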

Answered By: DopeGhoti

I think /dev/null is a character device (that behaves like an ordinary file) instead of a program for performance reasons.

If it were a program, it would require loading, starting, scheduling, running, and afterwards stopping and unloading. The simple C program you are describing would of course not consume a lot of resources, but I think it makes a significant difference when you consider a large number (say, millions) of redirect/piping actions, since process management operations are costly on a large scale: they involve context switches.

Another assumption: piping into a program requires memory to be allocated by the receiving program (even if it is discarded directly afterwards). So if you pipe into the tool, you pay the memory cost twice: once in the sending program and again in the receiving program.

Answered By: user5626466

Beyond the performance benefits of using a character-special device, the primary benefit is modularity. /dev/null may be used in almost any context where a file is expected, not just in shell pipelines. Consider programs that accept files as command-line parameters.

# We don't care about log output.
$ frobify --log-file=/dev/null

# We are not interested in the compiled binary, just seeing if there are errors.
$ gcc foo.c -o /dev/null || echo "foo.c does not compile!"

# Easy way to force an empty list of exceptions.
$ start_firewall --exception_list=/dev/null

These are all cases where using a program as a source or sink would be extremely cumbersome. Even in the shell pipeline case, stdout and stderr may be redirected to files independently, something that is difficult to do with executables as sinks:

# Suppress errors, but print output.
$ grep foo * 2>/dev/null
Answered By: ioctl

I suspect the why has a lot to do with the vision/design that shaped Unix (and consequently Linux), and the advantages stemming from it.

No doubt there’s a non-negligible performance benefit to not spinning up an extra process, but I think there’s more to it: Early Unix had an “everything is a file” metaphor, which has a non-obvious but elegant advantage if you look at it from a system perspective, rather than a shell scripting perspective.

Say you have your null command-line program, and /dev/null the device node. From a shell-scripting perspective, the foo | null pipeline is genuinely useful and convenient, while foo >/dev/null takes a tiny bit longer to type and can seem weird.

But here are two exercises:

  1. Let’s implement the program null using existing Unix tools and /dev/null – easy: cat >/dev/null. Done.

  2. Can you implement /dev/null in terms of null?

You’re absolutely right that the C code to just discard input is trivial, so it might not yet be obvious why it’s useful to have a virtual file available for the task.

Consider: almost every programming language already needs to work with files, file descriptors, and file paths, because they were part of Unix’s “everything is a file” paradigm from the beginning.

If all you have are programs that write to stdout, well, those programs don’t care whether you redirect them into a virtual file that swallows all writes, or pipe them into a program that swallows all writes.

Now if you have programs that take file paths for either reading or writing data (which most programs do) – and you want to add “blank input” or “discard this output” functionality to those programs – well, with /dev/null that comes for free.
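To make “comes for free” concrete, here is a hypothetical sketch (the program and the frob.log default are invented for illustration) of a tool that logs to whatever path it is given; passing /dev/null disables logging with no extra code path:

#include <stdio.h>

int main(int argc, char **argv) {
    /* Log to the path the caller supplies; there is no special case
     * for "no logging" -- the caller just passes /dev/null. */
    const char *log_path = (argc > 1) ? argv[1] : "frob.log";
    FILE *log = fopen(log_path, "w");
    if (log == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(log, "starting up\n");
    /* ... the program's real work would happen here ... */
    fclose(log);
    return 0;
}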

Notice that the elegance of it is that it reduces the code complexity of all involved programs – for each common-but-special use case that your system can provide as a “file” with an actual “filename”, your code can avoid adding custom command-line options and custom code paths to handle it.

Good software engineering often depends on finding good or “natural” metaphors for abstracting some element of a problem in a way that becomes easier to think about but remains flexible, so that you can solve basically the same range of higher-level problems without having to spend the time and mental energy on reimplementing solutions to the same lower-level problems constantly.

“Everything is a file” seems to be one such metaphor for accessing resources: you call open on a path in a hierarchical namespace, get back a reference (a file descriptor) to the object, and you can then read, write, etc., on that file descriptor. Your stdin/stdout/stderr are also file descriptors that just happened to be pre-opened for you. Your pipes are just files and file descriptors, and file redirection lets you glue all these pieces together.

Unix succeeded as much as it did in part because of how well these abstractions worked together, and /dev/null is best understood as part of that whole.


P.S. It’s worth looking at the Unix version of “everything is a file” and things like /dev/null as the first steps towards a more flexible and powerful generalization of the metaphor that has been implemented in many systems that followed.

For example, in Unix, special file-like objects like /dev/null had to be implemented in the kernel itself. But exposing functionality in file/folder form turned out to be useful enough that multiple systems since then have provided a way for ordinary programs to do the same.

One of the first was the Plan 9 operating system, made by some of the same people who made Unix. Later, GNU Hurd did something similar with its “translators”. Meanwhile, Linux ended up getting FUSE (which has spread to the other mainstream systems by now as well).

Answered By: mtraceur

Aside from the “everything is a file” convenience, and hence ease of use everywhere, that most other answers are based on, there is also the performance issue that @user5626466 mentions.

To show this in practice, we’ll create a simple program called nullread.c:

#include <unistd.h>

/* 1 MiB scratch buffer for draining standard input. */
char buf[1024*1024];

int main() {
        /* Read and discard until EOF (read() returns 0) or an error. */
        while (read(0, buf, sizeof(buf)) > 0);
}

and compile it with gcc -O2 -Wall -W nullread.c -o nullread

(Note: we cannot use lseek(2) on pipes, so the only way to drain a pipe is to read from it until it is empty.)
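That claim is easy to verify. Here is a small self-contained sketch (mine, not part of the benchmark) showing the kernel refusing to seek on a pipe:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1)
        return 1;

    /* The kernel rejects seeks on pipes with ESPIPE, so a reader
     * cannot skip ahead; it must read the data to drain it. */
    if (lseek(fds[0], 0, SEEK_CUR) == -1)
        printf("lseek on a pipe fails: %s\n", strerror(errno));
    return 0;
}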

% time dd if=/dev/zero bs=1M count=5000 |  ./nullread
5242880000 bytes (5,2 GB, 4,9 GiB) copied, 9,33127 s, 562 MB/s
dd if=/dev/zero bs=1M count=5000  0,06s user 5,66s system 61% cpu 9,340 total
./nullread  0,02s user 3,90s system 41% cpu 9,337 total

whereas with standard /dev/null file redirection we get much better speeds, due to the factors mentioned above: fewer context switches, the kernel simply ignoring the data instead of copying it, and so on:

% time dd if=/dev/zero bs=1M count=5000 > /dev/null
5242880000 bytes (5,2 GB, 4,9 GiB) copied, 1,08947 s, 4,8 GB/s
dd if=/dev/zero bs=1M count=5000 > /dev/null  0,01s user 1,08s system 99% cpu 1,094 total

(this should be a comment there, but is too big for that and would be completely unreadable)

Answered By: Matija Nalis

Your question is posed as if something would be gained, perhaps in simplicity, by using a null program in lieu of a file. Perhaps we could get rid of the notion of “magic files” and instead have just “ordinary pipes”.

But consider: a pipe is also a file. Pipes are normally not named, and so can only be manipulated through their file descriptors.
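A minimal sketch (my illustration, plain POSIX) of that point: the two descriptors returned by pipe(2) are used with exactly the same read() and write() calls as any file, /dev/null included:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];   /* fds[0]: read end, fds[1]: write end */
    char buf[16];

    if (pipe(fds) == -1)
        return 1;

    /* Same system calls as for a regular file or /dev/null. */
    write(fds[1], "foo", 3);
    ssize_t n = read(fds[0], buf, sizeof(buf));
    printf("read %zd bytes: %.*s\n", n, (int)n, buf);
    return 0;
}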

Consider this somewhat contrived example:

$ echo -e 'foo\nbar\nbaz' | grep foo
foo

Using Bash’s process substitution, we can accomplish the same thing in a more roundabout way:

$ grep foo <(echo -e 'foo\nbar\nbaz')
foo

Replace the grep with echo and we can see what happens under the covers:

$ echo foo <(echo -e 'foo\nbar\nbaz')
foo /dev/fd/63

The <(...) construct is just replaced with a filename, and grep thinks it’s opening any old file; it just happens to be named /dev/fd/63. Here, /dev/fd is a magic directory that makes named pipes for every file descriptor possessed by the process accessing it.

We could make it less magic with mkfifo to make a named pipe that shows up in ls and everything, just like an ordinary file:

$ mkfifo foofifo
$ ls -l foofifo 
prw-rw-r-- 1 indigo indigo 0 Apr 19 22:01 foofifo
$ grep foo foofifo

Elsewhere:

$ echo -e 'foo\nbar\nbaz' > foofifo

and behold, grep will output foo.
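The same dance works from C, for what it’s worth; a sketch assuming a POSIX system (the file name is reused from the shell example above):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Create the same named pipe the shell's mkfifo made. */
    if (mkfifo("foofifo", 0644) == -1) {
        perror("mkfifo");
        return 1;
    }
    /* open() blocks until a reader (e.g. "grep foo foofifo")
     * opens the other end, just like the shell example does. */
    int fd = open("foofifo", O_WRONLY);
    if (fd != -1) {
        write(fd, "foo\nbar\nbaz\n", 12);
        close(fd);
    }
    unlink("foofifo");
    return 0;
}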

I think once you realize that pipes, regular files, and special files like /dev/null are all just files, it’s apparent that implementing a null program is more complex. The kernel has to handle writes to a file either way, but in the case of /dev/null it can just drop the writes on the floor, whereas with a pipe it has to actually transfer the bytes to another program, which then has to actually read them.

Answered By: Phil Frost

I hope that you are also aware of /dev/chargen, /dev/zero, and others like them, including /dev/null.

Linux/UNIX has a few of these, made available so that people can make good use of well-written code fragments.

Chargen is designed to generate a specific, repeating pattern of characters; it is quite fast, would push the limits of serial devices, and would help debug serial protocols that were written and failed some test or other.

Zero is designed to populate an existing file or to output a whole lot of zeros.

/dev/null is just another tool with the same idea in mind.

Having all of these tools in your toolkit means that you have half a chance at making an existing program do something unique, using them as devices or file replacements for your specific need.

Let’s set up a contest to see who can produce the most exciting result given only the few character devices in your version of Linux.

Answered By: helpful

I would argue that there is a security issue beyond historical paradigms and performance. Limiting the number of programs with privileged execution credentials, no matter how simple they are, is a fundamental tenet of system security. A replacement /dev/null would certainly be required to have such privileges, given its use by system services. Modern security frameworks do an excellent job of preventing exploits, but they aren’t foolproof. A kernel-driven device accessed as a file is much more difficult to exploit.

Answered By: NickW

As others have already pointed out, /dev/null is, in effect, a program made of a handful of lines of code. It’s just that those lines of code are part of the kernel.

To make it clearer, here is how the Linux implementation works: a character device registers functions that are called when it is read from or written to. Writing to /dev/null calls write_null, while reading from it calls read_null; both are registered in the kernel source file drivers/char/mem.c.

Literally a handful of lines of code: these functions do nothing. You’d need more lines of code than fingers on your hands only if you counted functions other than read and write.
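For the curious, those two handlers have roughly this shape (paraphrased from the kernel’s drivers/char/mem.c; this is kernel-side code, not a standalone program):

/* Reading /dev/null: report end-of-file immediately. */
static ssize_t read_null(struct file *file, char __user *buf,
                         size_t count, loff_t *ppos)
{
        return 0;
}

/* Writing to /dev/null: claim every byte was written, store nothing. */
static ssize_t write_null(struct file *file, const char __user *buf,
                          size_t count, loff_t *ppos)
{
        return count;
}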

Answered By: Matthieu Moy