When to use /dev/random vs /dev/urandom
Should I use /dev/random or /dev/urandom?
In which situations would I prefer one over the other?
Use /dev/urandom for most practical purposes.
The longer answer depends on the flavour of Unix that you’re running.
As @DavidSchwartz pointed out in a comment, using
/dev/urandom is preferred in the vast majority of cases. He and others also provided a link to the excellent Myths about
/dev/urandom article which I recommend for further reading.
- The manpage is misleading.
- Both are fed by the same CSPRNG to generate randomness (diagrams 2 and 3)
- /dev/random blocks when it runs out of entropy, so reading from /dev/random can halt process execution.
- The amount of entropy is conservatively estimated, but not counted
- /dev/urandom will never block.
- In rare cases very shortly after boot, the CSPRNG may not have had enough entropy to be properly seeded and /dev/urandom may not produce high-quality randomness.
- Entropy running low is not a problem if the CSPRNG was initially seeded properly.
- The CSPRNG is being constantly re-seeded.
- In Linux 4.8 and onward, /dev/urandom does not deplete the entropy pool (used by /dev/random) but uses the CSPRNG output from upstream.
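In practice, this means an application can simply read from the kernel CSPRNG. A minimal Python sketch (the standard-library `os.urandom` reads from `/dev/urandom`, or the equivalent syscall, under the hood):

```python
import os

# os.urandom() pulls from the kernel CSPRNG (/dev/urandom or the
# getrandom() syscall on recent Linux); once the pool is seeded,
# this never blocks.
key = os.urandom(32)  # 32 bytes, e.g. for a 256-bit key

print(len(key))  # → 32
```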
Exceptions to the rule
In the Cryptography Stack Exchange’s When to use /dev/urandom in Linux, @otus gives two use cases:
Shortly after boot on a low entropy device, if enough entropy has not yet been generated to properly seed the CSPRNG.
If you’re worried about (1), you can check the entropy available in /dev/random.
If you’re doing (2) you’ll know it already 🙂
Note: You can check if reading from /dev/random will block, but beware of possible race conditions.
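On Linux, the kernel exposes its entropy estimate via procfs. A small sketch (the `/proc` path is Linux-specific; on other systems the function just returns `None`):

```python
import os

ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"  # Linux-specific

def entropy_available():
    """Return the kernel's current entropy estimate in bits,
    or None if the proc file does not exist (non-Linux systems)."""
    if not os.path.exists(ENTROPY_PATH):
        return None
    with open(ENTROPY_PATH) as f:
        return int(f.read().strip())

print(entropy_available())
```

Note that a healthy-looking value here does not guarantee a subsequent read from /dev/random won’t block: another process may drain the pool between the check and the read, which is exactly the race condition mentioned above.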
Alternative: use neither!
@otus also pointed out that the
getrandom() system call will read from
/dev/urandom and only block if the initial seed entropy is unavailable.
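Python (3.6+ on Linux) exposes this syscall as `os.getrandom`; a sketch that prefers it and falls back to `os.urandom` elsewhere (the fallback and function name are my own choices, not from the original answer):

```python
import os

def get_random_bytes(n):
    """Prefer the getrandom() syscall, which blocks only until the
    kernel CSPRNG is initially seeded; fall back to os.urandom()
    where the syscall is unavailable (non-Linux platforms)."""
    if hasattr(os, "getrandom"):
        return os.getrandom(n)  # default flags: blocks only pre-seeding
    return os.urandom(n)

print(len(get_random_bytes(16)))  # → 16
```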
There are issues with changing /dev/urandom to use getrandom(), but it is conceivable that a new /dev/xrandom device could be created based upon getrandom().
It doesn’t matter: macOS uses 160-bit Yarrow based on SHA1, and there is no difference between /dev/random and /dev/urandom; both behave identically. Apple’s iOS also uses Yarrow.
It doesn’t matter, as Wikipedia says:
/dev/urandom is just a link to
/dev/random and only blocks until properly seeded.
This means that after boot, FreeBSD is smart enough to wait until enough seed entropy has been gathered before delivering a never-ending stream of random goodness.
Use /dev/urandom, assuming your system has read at least once from /dev/random to ensure proper initial seeding.
The rnd(4) manpage says:
/dev/random sometimes blocks. Will block early at boot if the
system’s state is known to be predictable.
Applications should read from
/dev/urandom when they need randomly
generated data, e.g. cryptographic keys or seeds for simulations.
Systems should be engineered to judiciously read at least once from
/dev/random at boot before running any services that talk to the
internet or otherwise require cryptography, in order to avoid
generating keys predictably.
Traditionally, the only difference between /dev/urandom and /dev/random is what happens when the kernel thinks there is no entropy in the system – /dev/random fails closed, while /dev/urandom fails open. Both drivers source entropy from the same pool; see /drivers/char/random.c for specifics.
Edited to add: As of Linux 4.8, /dev/urandom was reworked to use a CSPRNG.
So when should you fail closed? For any kind of cryptographic use, specifically seeding a DRBG. There is a very good paper explaining the consequences of using
/dev/urandom when generating RSA keys and not having enough entropy. Read Mining Your Ps and Qs.
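The attack in that paper boils down to computing pairwise GCDs of RSA moduli: two keys generated from the same poor entropy can share a prime factor, and a single GCD then factors both. A toy Python illustration (with tiny made-up primes, nothing like real key sizes):

```python
from math import gcd

# Toy primes standing in for the large primes of real RSA keys.
p_shared, q1, q2 = 10007, 10009, 10037

# Two "keys" generated on low-entropy devices that happened to
# pick the same prime p_shared.
n1 = p_shared * q1
n2 = p_shared * q2

common = gcd(n1, n2)       # recovers the shared prime
assert common == p_shared
print(n1 // common, n2 // common)  # → 10009 10037: both keys factored
```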
This is somewhat of a “me too” answer, but it strengthens Tom Hale’s recommendation. It squarely applies to Linux.
- Don’t use /dev/random.
According to Theodore Ts’o on the Linux Kernel Crypto mailing list,
/dev/random has been deprecated for a decade. From Re: [RFC PATCH v12 3/4] Linux Random Number Generator:
Practically no one uses /dev/random. It’s essentially a deprecated
interface; the primary interfaces that have been recommended for well
over a decade is /dev/urandom, and now, getrandom(2).
We regularly test /dev/random, and it suffers frequent failures. The test performs three steps: (1) drain /dev/random by asking for 10K bytes in non-blocking mode; (2) request 16 bytes in blocking mode; (3) attempt to compress the block to see if it is random (a poor man’s test). The test takes minutes to complete.
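The compression step can be sketched in a few lines of Python. This reads from `os.urandom` so the sketch itself never blocks, and uses zlib as the "poor man's" check (the 0.9 threshold is my own illustrative choice):

```python
import os
import zlib

def looks_random(data, threshold=0.9):
    """Poor man's randomness check: random bytes are essentially
    incompressible, so a high compressed/original size ratio
    suggests good randomness. Not a real statistical test."""
    compressed = zlib.compress(data, level=9)
    return len(compressed) / len(data) > threshold

sample = os.urandom(4096)
print(looks_random(sample))          # → True: random data is incompressible
print(looks_random(b"\x00" * 4096))  # → False: constant data compresses well
```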
The problem is so bad on Debian systems (i686, x86_64, ARM, and MIPS) that we asked the GCC Compile Farm to install the rng-tools package on their test machines. From Install rng-tools on gcc67 and gcc68:
I would like to request that rng-tools be installed on gcc67 and
gcc68. They are Debian systems, and /dev/random suffers entropy
depletion without rng-tools when torture testing libraries which
utilize the device.
The BSDs and OS X appear OK. The problem is definitely Linux.
It might also be worth mentioning Linux does not log generator failures. They did not want the entries filling up the system log. To date, most failures are silent and go undetected by most users.
The situation should be changing shortly since the kernel is going to print at least one failure message. From [PATCH] random: silence compiler warnings and fix race on the kernel crypto mailing list:
Specifically, I added
depends on DEBUG_KERNEL. This means that these
useful warnings will only poke other kernel developers. This is probably
exactly what we want. If the various associated developers see a warning
coming from their particular subsystem, they’ll be more motivated to
fix it. Ordinary users on distribution kernels shouldn’t see the
warnings or the spam at all, since typically users aren’t running debug kernels.
I think it is a bad idea to suppress all messages from a security
engineering point of view.
Many folks don’t run debug kernels. Most of the users who want or need
to know of the issues won’t realize it’s happening. Consider, the
reason we learned of systemd’s problems was due to dmesg messages.
Suppressing all messages for all configurations cast a wider net than
necessary. Configurations that could potentially be detected and fixed
likely will go unnoticed. If the problem is not brought to light, then
it won’t be fixed.
I feel like the kernel is making policy decisions for some
organizations. For those who have hardware that is effectively
unfixable, the organization has to decide what to do based on its
risk aversion. They may decide to live with the risk, or they may
decide to refresh the hardware. However, without information on the
issue, they may not even realize they have an actionable item.
The compromise eventually reached later in the thread was at least one dmesg per calling module.