Message-ID: <20140720162622.29664.qmail@ns.horizon.com>
Date: 20 Jul 2014 12:26:22 -0400
From: "George Spelvin" <linux@...izon.com>
To: tytso@....edu
Cc: linux@...izon.com, linux-crypto@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH, RFC] random: introduce getrandom(2) system call
One basic question... why limit this to /dev/random?
If we're trying to avoid fd exhaustion attacks, wouldn't an "atomically
read a file into a buffer" system call (that could be used on
/dev/urandom, or /etc/hostname, or /proc/foo, or...) be more useful?
E.g.
ssize_t readat(int dirfd, char const *path, struct stat *st,
char *buf, size_t len, int flags);
It's basically equivalent to openat(), optional fstat() (if st is non-NULL),
read(), close(), but it doesn't allocate an fd number.
Is it necessary to have a system call just for entropy?
If you want a "urandom that blocks until seeded", you can always create
another device node for the purpose.
> The main argument I can see for putting in a limit is to encourage the
> "proper" use of the interface. In practice, anything larger than 128
> probably means the interface is getting misused, either due to a bug
> or some other kind of oversight.
Agreed. Even 1024 bits is excessive. 32 bytes is the "real" maximum
that people should be asking for with current primitives, so an interface
limitation to 64 is quite defensible. (But 128 isn't *wildly* excessive.)
If you do stick with a random-specific call, specifying the entropy
in bits (with some specified convention for the last fractional byte)
is another interesting idea.  Perhaps too prone to bugs, though.
(People thinking it's bytes and producing low-entropy keys.)
--