Message-ID: <20140414160211.GE711@lst.de>
Date: Mon, 14 Apr 2014 18:02:11 +0200
From: Torsten Duwe <duwe@....de>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: Andy Lutomirski <luto@...capital.net>,
Theodore Ts'o <tytso@....edu>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Matt Mackall <mpm@...enic.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Arnd Bergmann <arnd@...db.de>,
Rusty Russell <rusty@...tcorp.com.au>,
Satoru Takeuchi <satoru.takeuchi@...il.com>,
ingo.tuchscherer@...ibm.com, linux-kernel@...r.kernel.org,
Hans-Georg Markgraf <MGRF@...ibm.com>,
Gerald Schaefer <gerald.schaefer@...ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Joe Perches <joe@...ches.com>
Subject: [PATCH v3 00/03]: hwrng: an in-kernel rngd
More or less a resend of v2.
On Wed, Mar 26, 2014 at 06:03:37PM -0700, H. Peter Anvin wrote:
> I'm wondering more about the default. We default to 50% for arch_get_random_seed, and this is supposed to be the default for in effect unverified hwrngs...
Done: 50% is now the default; that's the only change from v2.
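
For illustration, here is a minimal sketch of how such a default
could look in the hwrng core; the name default_quality and the
per-1024-bits scale are assumptions of this sketch, not necessarily
the final interface:

	/* Entropy credited per 1024 bits of hwrng output;
	 * 512/1024 = 50%.  Name and scale are assumptions. */
	static unsigned short default_quality = 512;
	module_param(default_quality, ushort, 0644);
	MODULE_PARM_DESC(default_quality,
		"default entropy credit per 1024 bits of hwrng data");

A driver known to be better (or worse) than that could then
override the default per device.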
Andy: the printk you pointed out already limits itself to one
message per 10 seconds, which is half the default rate limit.
Also, as Peter already wrote, we're dealing with true HWRNGs
here; if such a device does not produce a single byte within
10 seconds, something _is_ severely broken and, like a dying
disk, worth logging.
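
For reference, the kind of ratelimiting meant here looks roughly
like this (state name and message text are made up for the sketch):

	#include <linux/kernel.h>
	#include <linux/ratelimit.h>

	/* At most one warning per 10 seconds (burst 1), i.e. half
	 * the rate of the 5-second DEFAULT_RATELIMIT_INTERVAL. */
	static DEFINE_RATELIMIT_STATE(hwrng_warn_rs, 10 * HZ, 1);

	static void report_hwrng_starvation(void)
	{
		if (__ratelimit(&hwrng_warn_rs))
			pr_warn("hwrng: no data from device for 10s\n");
	}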
Here's one of the better circuits I found:
http://www.maximintegrated.com/app-notes/index.mvp/id/3469
or offline:
http://pdfserv.maximintegrated.com/en/an/AN3469.pdf
Disclaimer: I'm not endorsing Maxim; it's just that this paper
hits the spot, IMHO.
Anything wrong with feeding those bits into the input pool?
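
To make the question concrete, the feeding path boils down to a
kthread loop along these lines; treat rng_get_data(), current_rng
and add_hwgenerator_randomness() as assumptions of the sketch
rather than the exact interfaces of the series:

	#include <linux/kthread.h>
	#include <linux/hw_random.h>
	#include <linux/random.h>

	/* In-kernel rngd: read from the active hwrng, mix the bytes
	 * into the input pool, credit only 50% of them as entropy. */
	static int hwrng_fillfn(void *unused)
	{
		u8 buf[32];
		long rc;

		while (!kthread_should_stop()) {
			rc = rng_get_data(current_rng, buf,
					  sizeof(buf), 1);
			if (rc <= 0)
				continue; /* warn (ratelimited), retry */
			/* rc bytes = rc * 8 bits; credit half */
			add_hwgenerator_randomness(buf, rc, rc * 8 / 2);
		}
		return 0;
	}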
Any other comments on the code?
Torsten