A system call for random numbers: getrandom()
The Linux kernel already provides several ways to get random numbers, each with its own set of constraints. But those constraints may make it impossible for a process to get random data when it needs it. The LibreSSL project has recently been making some noise about the lack of a "safe" way to get random data under Linux. That has led Ted Ts'o to propose a new getrandom() system call that would provide LibreSSL with what it needs, while also solving other kernel random number problems along the way.
The kernel maintains random number "pools" that get fed data that comes from sampling unpredictable events (e.g. inter-key timing from the keyboard). The amount of entropy contributed by each of these events is estimated and tracked. A cryptographically secure pseudo-random number generator (PRNG) is used on the data in the pools, which then feed two separate devices: /dev/urandom and /dev/random.
The standard way to get random numbers from the kernel is by reading from the /dev/urandom device. But there is also the /dev/random device that will block until enough entropy has been collected to satisfy the read() request. /dev/urandom should be used for essentially all random numbers required, but /dev/random is sometimes used for things like extremely sensitive, long-lived keys (e.g. GPG) or one-time pads. In order to use either one, though, an application has to be able to open() a file, which requires that there be file descriptors available. It also means that the application has to be able to see the device files, which may not be the case in some containers or chroot() environments.
LibreSSL has been written to use /dev/urandom, but also to have a fallback if there is an exhaustion of file descriptors (which an attacker might try to arrange) or there is some other reason that the library can't open the file. The fallback is to use the deprecated sysctl() system call to retrieve the /proc/sys/kernel/random/uuid value, but without actually having to open that file (since LibreSSL already knows that /dev/urandom could not be opened). But sysctl() may disappear someday—some distribution kernels have already removed it—and, sometimes, using it puts a warning into the kernel log. If the sysctl() fails, LibreSSL falls further back to a scary-looking function that tries to generate its own random numbers from various (hopefully) unpredictable values available to user space (e.g. timestamps, PID numbers, etc.).
All of that can be seen in a well-commented chunk of code in LibreSSL's getentropy_linux.c file. The final comment in that section makes a request:
 * We hope this demonstrates that Linux should either retain their
 * sysctl ABI, or consider providing a new failsafe API which
 * works in a chroot or when file descriptors are exhausted.
 */
That new API is precisely what Ts'o has proposed. The getrandom() system call is well-described in his patch (now up to version 4). It is declared as follows:
#include <linux/random.h>

int getrandom(void *buf, size_t buflen, unsigned int flags);
A call will fill buf with up to buflen bytes of random data that can be used for cryptographic purposes, returning the number of bytes stored. As might be guessed, the flags parameter will alter the behavior of the call. In the case where flags == 0, getrandom() will block until the /dev/urandom pool has been initialized. If flags is set to GRND_NONBLOCK, then getrandom() will return -1 with an error number of EAGAIN if the pool is not initialized.
The GRND_RANDOM flag bit can be used to switch to the /dev/random pool, subject to the entropy requirements of that pool. That means the call will block until the pool has the required entropy, unless the GRND_NONBLOCK bit is also present in flags, in which case it will return as many bytes as it can; it will return -1 for an error with errno set to EAGAIN if there is no entropy available at all. The call returns the number of bytes it placed into buf (or -1 for an error). Short reads can occur due to a lack of entropy for the /dev/random pool or because the call was interrupted by a signal, but reads of 256 bytes or less from /dev/urandom are guaranteed to return the full request once that device has been initialized.
In the proposed man page that accompanies the patch, Ts'o shows sample code that could be used to emulate the OpenBSD getentropy() system call using getrandom(). One complaint about the patch came from Christoph Hellwig, who was concerned that Ts'o was not just implementing "exactly the same system call" as OpenBSD. He continued: "Having slightly different names and semantics for the same functionality is highly annoying." But Ts'o is trying to solve more than just the LibreSSL problem, he said. getrandom() is meant to be a superset of OpenBSD's getentropy()—glibc can easily create a compatible getentropy(), as he showed in the patch.
The requirement that /dev/urandom be initialized before getrandom() will return any data from that pool is one of the new features that the proposed system call delivers. Currently, there is no way for an application to know that at least 128 bits of entropy have been gathered since the system was booted (which is the requirement to properly initialize /dev/urandom). Now, an application can either block to wait for that to occur, or test for the condition using GRND_NONBLOCK and looking for EAGAIN. Since the behavior of /dev/urandom is part of the kernel ABI, it could not change, but adding this blocking to the new system call is perfectly reasonable.
The system call also provides a way to do a non-blocking read of /dev/random to get a partial buffer in the event of a lack of entropy. It is a bit hard to see any real application for that—if you don't need a full buffer of high-estimated-entropy random numbers, why ask for one? In fact, the new call provides a number of ways to abuse the kernel's random number facility (requesting INT_MAX bytes, for example), but that isn't really any different than the existing interfaces.
There have been lots of comments of various sorts on Ts'o's patches, but few complaints. The overall idea seems to make sense to those participating in the thread, anyway. Some changes have been made based on the comments, most notably switching to blocking by default. But the latest revision generated only comments about typos. Unless that changes, it would seem that we could see getrandom() in the kernel rather soon, perhaps as early as 3.17.
Index entries for this article:
    Kernel: Random numbers
    Kernel: Security/Random number generation
    Security: Linux kernel
    Security: Random number generation
Posted Jul 24, 2014 3:15 UTC (Thu)
by busterb (subscriber, #560)
[Link] (23 responses)
I wonder which Linux systems don't explicitly seed urandom on boot, or do not do it early enough for a getrandom caller to hit the uninitialized condition. It seems plausible that a number of embedded systems would forget, or have predictable initial seeding mechanisms. It'd be an interesting thing to research.
I had never heard of pollinate until I started searching for how Ubuntu seeds itself:
http://blog.dustinkirkland.com/2014/02/random-seeds-in-ub...
Others load seed data from a file that can be written by the installer or the previous boot:
http://www.freedesktop.org/software/systemd/man/systemd-r...
It might pay to be careful with cloned system images, though:
https://dev.openwrt.org/browser/trunk/openwrt/target/defa...
Posted Jul 24, 2014 5:07 UTC (Thu)
by dlang (guest, #313)
[Link] (22 responses)
Linux tries to do this. It initializes the entropy pool very early in the boot process, and works hard to populate it as quickly as possible.
However, on some systems there just isn't much randomness around, and on those systems it is possible that you won't have enough when you try to use it early in the boot process.
And if you have applications that drain the pool by requesting too much randomness, you can run out, even on good systems.
Posted Jul 24, 2014 5:32 UTC (Thu)
by dlang (guest, #313)
[Link] (20 responses)
this of course only applies to random, not urandom
Posted Jul 25, 2014 5:58 UTC (Fri)
by ncm (guest, #165)
[Link] (19 responses)
But you don't really run out, as such. Rather, you get a decreasing amount of entropy with each read. Reads from /dev/random just block when it judges that the entropy it can deliver has been stretched too thin.
Posted Jul 25, 2014 18:33 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (18 responses)
Posted Jul 25, 2014 19:14 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link] (17 responses)
Actually, it does. It doesn't run out of *pseudo*-random numbers, but real random numbers are hard to come by and in limited supply. Given enough data out of the PRNG relative to the size of the entropy pool, it is possible, at least in theory, to reverse-engineer the PRNG's internal state and predict which numbers it will produce next.
So far as I know this attack has never been successful in practice, assuming a properly seeded PRNG. There is some concern when the system is starved for sources of randomness, primarily in embedded devices, since that can drastically reduce the search space.
Posted Jul 25, 2014 20:01 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (3 responses)
Posted Jul 25, 2014 23:10 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link] (2 responses)
Of course, the practical difference between an ideal PRNG with 256+ bits of internal state, seeded with an equivalent amount of entropy, and a true random number source is vanishingly small. The risk is that your PRNG isn't ideal (and is thus vulnerable to cryptanalysis) or your seed doesn't have as much initial entropy as you thought.
Posted Jul 26, 2014 0:19 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (1 responses)
I suppose the question really is, how long can you recycle the same initial hardware randomness input in a PRNG before an attacker could figure something out. That's kind of like figuring out how long your private keys need to be to be resilient against attack for a particular period of time. I have no idea how the math works out on that though.
Posted Jul 26, 2014 0:35 UTC (Sat)
by dlang (guest, #313)
[Link]
In other words, in theory it's a weakness against the PRNG and a reason to not use it, but in practice, avoiding a PRNG for this reason is pure paranoia.
Posted Jul 25, 2014 22:06 UTC (Fri)
by apoelstra (subscriber, #75205)
[Link] (12 responses)
Posted Jul 25, 2014 22:54 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link] (11 responses)
However, where he relies on the line that "we can figure out how to use a single key to safely encrypt many messages"... that has been a problem for various cryptosystems in the past. If you're not careful, someone with access to enough ciphertexts may be able to infer the key used to encrypt them, particularly if they also know the corresponding plaintexts.
In any case, my main point was simply that PRNGs don't create any randomness beyond whatever may have been in their initial seed. Seed a PRNG with 32 bits and generate 1 MiB of "random" data from it, and you still only have at most 32 bits of entropy--the probability of guessing the output (knowing which PRNG was used) would only be one in 2**32, not one in 2**1048576 as would be the case for the same quantity of truly random data.
Posted Jul 25, 2014 23:43 UTC (Fri)
by dlang (guest, #313)
[Link] (6 responses)
Not true; you would not only need to guess the correct seed out of 2**32 possibilities, you would also need to guess the correct offset into the resulting stream that the 1 MiB of data was pulled from. That adds some additional bits of randomness (but still far less than the 2**1048576 if every bit was random).
Posted Jul 27, 2014 3:52 UTC (Sun)
by nybble41 (subscriber, #55106)
[Link] (5 responses)
A fair point, assuming that the PRNG has an internal state larger than the seed. One might alternatively consider that offset to be part of the seed. I was assuming that you generated the output after preparing the PRNG with at most 32 bits of entropy *in total*.
Posted Jul 27, 2014 4:02 UTC (Sun)
by dlang (guest, #313)
[Link] (4 responses)
As I understand it (vastly simplified, with small numbers for example's sake):
you take 32 bits of random data; it gets mixed and seeds the PRNG, but the PRNG has its own state pool.
This state pool starts off with the 32 bits of random data, but is much larger (say 256 bits).
Each time data is read from the PRNG, it calculates some random data. Some of this random data is fed to the user; the rest replaces the existing pool.
From 32 bits of random data, you can generate many TiB of output, and that output cannot be identified as not being random by any analysis. Yes, at some point it could repeat, but nobody can predict when that is, even if they have the contents of the pool.
So the offset into the stream can be much larger than the randomness used to initialize the pool in the first place.
If you are the only user of the PRNG, the offset into the stream is a known value to you and adds no randomness.
But if there are other users of the PRNG output, then that adds to the randomness of the bits you read from the PRNG
Posted Jul 27, 2014 23:12 UTC (Sun)
by nybble41 (subscriber, #55106)
[Link] (3 responses)
If you don't control the offset, then yes, that contributes somewhat to the amount of entropy introduced into the PRNG. For example, if there could have been up to 1 MiB read from the PRNG in one-byte increments after it was seeded with 32 random bits but before you read your data, then that introduces at most 20 additional bits of entropy. You would have to search though a 52-bit space--32 bits of seed plus 20 bits of offset--to find a match for your data and determine the PRNG's internal state with a high degree of probability.
I say "at most 20 bits" because it would be unreasonable to assume that the possible offsets are uniformly distributed from zero to 1 MiB; some sizes will be more likely than others, reducing the search space.
On the other hand, if you fully randomized the PRNG's internal state, then any additional offset past that would contribute no additional entropy. Instead of searching the larger seed + offset space, you'd just search the PRNG's state space directly. If, that is, it were at all practical to brute-force search a 256-bit space.
Posted Jul 28, 2014 0:13 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Over time, as new randomness is folded in and the offset grows, I would have confidence that the state is too random to predict, but anything that uses the PRNG output shortly after it is initially set up could be using predictable values. This would seem to be of concern to users of randomness early in the boot process, ssh key generation being the most obvious, but there are other things which use randomness.
I would presume that the people who actually fully understand this stuff have thought about all of these things and are way ahead of a layman such as myself in mitigating these issues.
Posted Jul 28, 2014 15:36 UTC (Mon)
by apoelstra (subscriber, #75205)
[Link] (1 responses)
It's not :) unless the parent post was just giving example numbers, he meant to say "32 bytes" or 256 bits.
Posted Jul 28, 2014 22:20 UTC (Mon)
by nybble41 (subscriber, #55106)
[Link]
On the other hand, if you seed /dev/urandom with 256 bits, but all but 32 of those bits are predictable to an attacker, you might as well be using a mere 32-bit seed... some entropy-starved embedded systems may be in this situation shortly after startup.
Posted Jul 26, 2014 1:33 UTC (Sat)
by apoelstra (subscriber, #75205)
[Link] (2 responses)
In the literature this sort of thing is called a "chosen plaintext attack", and any public-key cryptosystem requires a mathematical proof demonstrating that a successful CPA attack can be harnessed to solve some "hard" computational problem, e.g. the discrete logarithm problem for an elliptic curve group.
Are these mathematical proofs worth anything? After all, they don't consider side-channel attacks or implementation bugs or compromised RNGs (except to assume them away, typically), and sometimes the proofs themselves are incorrect. This is a point of great controversy, but the fact is that cryptography as an academic discipline has moved beyond the "well, try not to let the attacker get -too- much information" kind of magical thinking that was typical of pre-1970s cryptography.
If your encryption primitive is not CPA-secure (at least CPA-secure --- systems in use today typically have stronger security properties), then its security depends, at best, on the exact way it is used. It is hard enough to build cryptosystems when your primitives are secure against these very general attacks. Without it, you are hopeless!
The security requirement for PRNGs, by the way, is that a computationally bounded adversary (i.e. one who is able to do polynomially many operations in the size of the seed) cannot distinguish the PRNG output from random with non-negligible probability. If your PRNG fails this requirement, it is not cryptographically secure and no amount of seed-guarding will change this. If it doesn't fail this, then a 256-bit seed is fine.
To contrast, the attack djb describes, where malicious entropy is inserted into whatever channels exist for this, is not only possible for attackers today, but is generally applicable: it will work no matter what the PRNG algorithm!
> In any case, my main point was simply that PRNGs don't create any randomness beyond whatever may have been in their initial seed. Seed a PRNG with 32 bits and generate 1 MiB of "random" data from it, and you still only have at most 32 bits of entropy--the probability of guessing the output (knowing which PRNG was used) would only be one in 2**32, not one in 2**1048576 as would be the case for the same quantity of truly random data.
Right, but neither of those numbers can be counted to by computers in our universe in its lifetime, so the distinction is not important from a security perspective. (If you are defending against a computationally unbounded adversary, your RNG does not matter since your other cryptographic primitives are not secure anyway.) This is what djb is saying when he argues that if 256 bits is enough security for a signing key, it's enough for a PRNG.
So if there's no benefit to "increasing the entropy" in this way, and it opens up a trivial algorithm-agnostic attack to any attacker who can influence the entropy source in any way....it's a bad idea.
Posted Oct 13, 2014 17:45 UTC (Mon)
by fuhchee (guest, #40059)
[Link] (1 responses)
> Right, but neither of those numbers can be counted to by
Your cell phone can count to 4 billion in a second or two.
Posted Oct 13, 2014 18:19 UTC (Mon)
by anselm (subscriber, #2796)
[Link]
Possibly, but there isn't enough energy in the universe to count up to 2**256.
Posted Jul 26, 2014 3:43 UTC (Sat)
by wahern (subscriber, #37304)
[Link]
The argument is more nuanced than that. He's actually addressing the anxiety around hardware-based RNGs like on recent Intel chips. Those sources have privileged access to the existing RNG state in the kernel because they can access main memory directly. It's possible that they could smuggle data out of the system by carefully choosing the random numbers they generate. Then people like the NSA sniff carrier signals, such as TCP sequence numbers.
"However, where he relies on the line that 'we can figure out how to use a single key to safely encrypt many messages'... that has been a problem for various cryptosystems in the past. If you're not careful, someone with access to enough ciphertexts may be able to infer the key used to encrypt them, particularly if they also know the corresponding plaintexts."
His argument is premised on (1) CSPRNGs and (2) secure sources of entropy. We _definitely_ have #1. The problems with cryptosystems are higher up the ladder, almost always PEBKAC related.
We have multiple sources for #2, but we shouldn't trust them. But we can mix them together. However _continued_ mixing could make you more susceptible to impossible-to-detect exfiltration attacks, so you should mix them until you're satisfied, then never interact with those sources again. Sort of a "wham, bam, thank you ma'am" relationship.
The real problem is knowing when you've collected sufficient entropy. You need enough, but as DJB shows, collecting too much could expose you to new forms of attack. Probably the best answer is to initially seed with hardware-based solutions like Intel RdRand, then mix in low-quality sources until you're satisfied that you've sufficiently closed the exfiltration gap. After that, you leave well enough alone. On networked systems we're talking a matter of seconds, or minutes at most.
Posted Jan 9, 2023 10:08 UTC (Mon)
by darwi (subscriber, #131202)
[Link]
> Linux tries to do this. It initializes the entropy pool very early in the boot process... However, on some systems, there just isn't much randomness around... And if you have applications that drain the pool by requesting too much randomness, you can run out, even on good systems.

This has been earlier reported, and a fix was applied by tglx and torvalds. Check earlier LWN articles here and here for context.
Posted Jul 24, 2014 20:42 UTC (Thu)
by samroberts (subscriber, #46749)
[Link] (1 responses)
Posted Jul 25, 2014 13:19 UTC (Fri)
by justincormack (subscriber, #70439)
[Link]
Posted Jul 25, 2014 16:39 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (12 responses)
Is exhaustion of file descriptors really an example of what this system call is intended to deal with? When a system runs out of file descriptors or any other system resource, all Hell breaks loose and one more program failing, because it can't establish a secure connection, should be barely noticeable.
I can't recall ever seeing code that goes out of its way to work around being generally unable to open files.
Posted Jul 25, 2014 20:15 UTC (Fri)
by jimparis (guest, #38647)
[Link] (11 responses)
An attacker might exhaust file descriptors maliciously, just to get some software to pick a bad random number, which could end up leaking a private key from a privileged process. The attacker would be careful in this case to try to cause the random number seeding to fail, while allowing the program to otherwise continue correctly.
Posted Jul 25, 2014 22:13 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (10 responses)
How would exhausting file descriptors get some software to pick a bad random number? The natural result of that would be for software that uses random numbers to refuse to continue.
But regardless of whether it's a valid expectation of the attacker, it doesn't explain why LibreSSL needs to have a fallback other than "return -1" for exhausted file descriptors. No other software does.
Posted Jul 25, 2014 23:41 UTC (Fri)
by dlang (guest, #313)
[Link] (9 responses)
if the program zeros a buffer, then tries to read random data into that buffer and doesn't check the error codes properly, the result is that it continues on with zeros instead of its random seed.
This is an advantage for the bad guy.
Yes, in theory this is handled by properly checking all error conditions
But in practice, we all know that such checks are not always done.
Also, note that shutting down the service is a DoS that is also to the advantage of the bad guy
Posted Jul 26, 2014 1:42 UTC (Sat)
by giraffedata (guest, #1954)
[Link] (8 responses)
So that still doesn't shed any light on how the fact that file descriptors could be exhausted means LibreSSL needs a fallback method of generating random numbers. LibreSSL does check the error condition -- that's how it knows to fall back.
And yet, no other program under the sun avoids DoS attacks by working around inability to open files. In fact, the program using LibreSSL most probably uses files other than /dev/urandom, so the bad guy can kill it by exhausting file descriptors regardless of what LibreSSL does.
It looks to me like the article is simply mistaken about the relevance of file descriptor exhaustion attacks. I think the reason LibreSSL has alternatives to /dev/urandom is that /dev/urandom might just be broken or not implemented on that system.
Posted Jul 26, 2014 4:03 UTC (Sat)
by jake (editor, #205)
[Link] (7 responses)
so, this comment that was quoted in the article:
> or consider providing a new failsafe API which
(which comes from the LibreSSL source) was not enough to convince you that the LibreSSL folks (at least) are worried about file descriptor exhaustion?
> I think the reason LibreSSL has alternatives to /dev/urandom is
interesting, but it certainly isn't what they *say* ...
jake
Posted Jul 26, 2014 15:55 UTC (Sat)
by giraffedata (guest, #1954)
[Link] (6 responses)
OK, I missed that. So the article is not mistaken. It's more like the developers were really confused, thinking it's worth adding a whole new system call to the kernel just to make a program progress a little further before succumbing to file descriptor exhaustion. Or there's some totally nonobvious attack vector I'm missing.
(I do understand that there are other, sensible, reasons to have getrandom()).
Posted Jul 26, 2014 21:18 UTC (Sat)
by dlang (guest, #313)
[Link]
well, that sort of thinking is par for the course for people who get tightly absorbed into security thinking. They start to see the small things that can fail and forget that the overall system is probably going to be down first.
Posted Jul 27, 2014 11:57 UTC (Sun)
by gioele (subscriber, #61675)
[Link] (4 responses)
Is it that hard to create a side program that uses some technique to force the exhaustion of fds during the entropy gathering (to create some weakness in a cryptographical step) and then stops, leaving the attacked programs with plenty of fds, as if nothing ever happened?
Posted Jul 27, 2014 16:11 UTC (Sun)
by giraffedata (guest, #1954)
[Link] (3 responses)
It doesn't matter because even if it's possible to create such a program, it's impossible for it to achieve its goal of creating weakness in a cryptographic step if LibreSSL refuses to proceed when the open of /dev/urandom fails.
That's what we've been talking about: the design choice of LibreSSL refusing to proceed in that case (the easy, natural, conventional thing to do) versus getting random numbers in some way that doesn't require file descriptors (which involves wishing for a new kind of system call) and proceeding.
Posted Jul 27, 2014 17:49 UTC (Sun)
by jimparis (guest, #38647)
[Link] (2 responses)
But what does "refuse to proceed" mean? Return an easily-ignored error code? Terminate the process? Sit in a busy loop? You'll get different answers based on who you ask. I generally agree with your point, but it's not as simple as you make it out to be. Making it so that the problem can never occur is just another way of fixing it.
Posted Jul 28, 2014 22:50 UTC (Mon)
by giraffedata (guest, #1954)
[Link] (1 responses)
It really doesn't matter that there are options, because at least one of them is an entirely reasonable response to a catastrophic failure such as file descriptor exhaustion - a more reasonable response than designing a new kernel interface or computing entropy some other way. As a practical matter, I think it's obvious in this case that "refuse to proceed" should just mean "return -1" when the open fails, which would ultimately cause LibreSSL to return failure to the user instead of creating a connection. The user can ignore that failure, but there's no way he can leak private information to an eavesdropper over a connection that does not exist.
I'm really just asking why would a developer single out this one particular catastrophic failure for heroic action to avoid it? I'll bet the same code allocates memory various places and just "refuses to proceed" if the allocation fails. And at some point it creates a socket and likely just "refuses to proceed" if it fails because of file descriptor exhaustion. Maybe it even uses a temporary file somewhere, and just "refuses to proceed" if the filesystem is full.
Posted Jul 28, 2014 23:13 UTC (Mon)
by jimparis (guest, #38647)
[Link]
> As a practical matter, I think it's obvious in this case that "refuse to proceed" should just mean "return -1" when the open fails, which would ultimately cause LibreSSL to return failure to the user instead of creating a connection.

This has nothing to do with "creating a connection"; existing code calls RAND_bytes() all the time for all sorts of things and doesn't always check the return code.

> I'm really just asking why would a developer single out this one particular catastrophic failure for heroic action to avoid it?

Because this is only a problem on Linux. Because the discussion was triggered by an article entitled LibreSSL's PRNG is Unsafe on Linux. Because, as a developer points out in the comments there, "we really want to see linux provide the getentropy() syscall, which fixes all the mentioned issues."
Posted Jul 26, 2014 2:23 UTC (Sat)
by idupree (guest, #71169)
[Link] (4 responses)
Why "It should not be used for Monte Carlo simulations or other programs/algorithms which are doing probabilistic sampling." (in the patch's man page): I'd like to see the man page say why. According to http://thread.gmane.org/gmane.linux.kernel.cryptoapi/11666 the reason is: "It will be slow, and then the graduate student will whine and complain and send a bug report. It will cause urandom to pull more heavily on entropy, and if that means that you are using some kind of hardware random generator on a laptop, such as tpm-rng, you will burn more battery, but no, it will not break. This is why the man page says SHOULD not, and not MUST not. :-)"
Posted Jul 31, 2014 5:29 UTC (Thu)
by lordsutch (guest, #53)
[Link] (3 responses)
Posted Jul 31, 2014 7:41 UTC (Thu)
by eternaleye (guest, #67051)
[Link] (2 responses)
In addition, it depletes the scarce entropy resources of the kernel by the truckload, which may cause things that _really_ need good cryptographic randomness (long-term public keys, etc) to block indefinitely on /dev/random (since while urandom doesn't block, it _depletes the same pool_ causing random to block).
[1] https://en.wikipedia.org/wiki/Well_Equidistributed_Long-p...
[2] https://en.wikipedia.org/wiki/Xorshift
Posted Jul 31, 2014 16:13 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Posted Feb 11, 2016 8:59 UTC (Thu)
by akostadinov (guest, #48510)
[Link]
A good read on why `random` is not a good idea: http://www.2uo.de/myths-about-urandom/