Message-ID: <YmJ4bJjet/QhkXZS@zx2c4.com>
Date:   Fri, 22 Apr 2022 11:42:04 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     Theodore Ts'o <tytso@....edu>, linux-kernel@...r.kernel.org,
        linux-crypto@...r.kernel.org,
        Linus Torvalds <torvalds@...uxfoundation.org>
Subject: Re: [PATCH v2] random: avoid mis-detecting a slow counter as a cycle
 counter

Hi Eric,

On Thu, Apr 21, 2022 at 05:34:58PM -0700, Eric Biggers wrote:
> On Fri, Apr 22, 2022 at 01:40:25AM +0200, Jason A. Donenfeld wrote:
> > Hi Eric,
> > 
> > Thanks. This looks better.
> > 
> > On Thu, Apr 21, 2022 at 04:31:52PM -0700, Eric Biggers wrote:
> > > Therefore, increase the number of counter comparisons from 1 to 3, to
> > > greatly reduce the rate of false positive cycle counter detections.
> > > +	for (i = 0; i < 3; i++) {
> > > +		unsigned long entropy = random_get_entropy();
> >  
> > Wondering: why do you do 3 comparisons rather than 2? What does 3 get
> > you that 2 doesn't already? I thought the only real requirement was that
> > in the event where (a)!=(b), (b) is read as meaningfully close as
> > possible to when the counter changes.
> > 
> 
> On CONFIG_PREEMPT kernels this code usually runs with preemption enabled, so I
> don't think it's guaranteed that any particular number of comparisons will be
> sufficient, since the task could get preempted for a long time between each call
> to random_get_entropy().  However, the chance of a false positive should
> decrease exponentially, and should be pretty small in the first place, so 3
> comparisons seems like a good number.

Ahh, I see. So you check three times instead of disabling
preemption/irqs, which would be awfully heavyweight. Seems like a
reasonable compromise.
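
Just to check my reading, the shape of that detection is roughly this
(a rough sketch with an invented helper name, not your exact patch):

    /*
     * Sketch only: treat the counter as a cycle counter only if it
     * changes across every one of several back-to-back reads, so a
     * slow counter that happens to tick once between two reads no
     * longer fools the detection.
     */
    static bool looks_like_cycle_counter(void)
    {
        unsigned long prev = random_get_entropy();
        int i;

        for (i = 0; i < 3; i++) {
            unsigned long cur = random_get_entropy();

            if (cur == prev)
                return false; /* no change: slow counter, or none */
            prev = cur;
        }
        return true;
    }

A slow counter would then have to tick at just the right moment three
times in a row to be mis-detected, which is where the exponential
decrease in the false positive rate comes from.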

By the way, I was thinking about the assumptions we're making with this
comparison ("two adjacent counters shouldn't be the same") in the
context of this idea from my first reply to you:

    static void entropy_timer(struct timer_list *t)
    {
        struct timer_state *s = container_of(t, struct timer_state, timer);
        if (++s->samples == s->samples_per_bit) {
            credit_entropy_bits(1);
            s->samples = 0;
        }
    }

A naive approach that strikes me as strictly _no worse_ than what we
currently have: right now we require every counter sample to differ
from the last in order to credit every time. If only every other
sample is different, then we should credit every other time. If every
third sample is different, we should credit every third time. And so
forth.
While that simple logic isn't some sort of fancy realtime FFT thing, it
also doesn't appear on its surface to be relying on assumptions that
we're not already making. I think? It has flaws -- it doesn't account
for the possibility that while the counter changes, it's way too uniform
in how it changes -- but neither does the current technique. So while
it's not the end goal of actually looking at this through some
statistical lens, it feels like an improvement on what we have now with
little complication.
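
Concretely, with the numbers in the snippet below: if only 512 of the
2048 adjacent trial pairs differ, then samples_per_bit becomes
DIV_ROUND_UP(2048, 512) = 4, and the timer callback credits one bit
for every fourth firing rather than on every firing.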

If that seems convincing, what do you make of the below snippet?

Jason

------------8<--------------------------------------------------------------

diff --git a/drivers/char/random.c b/drivers/char/random.c
index bf89c6f27a19..cabba031cbaf 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1354,6 +1354,12 @@ void add_interrupt_randomness(int irq)
 }
 EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
+struct entropy_timer_state {
+	unsigned long entropy;
+	struct timer_list timer;
+	unsigned int samples, samples_per_bit;
+};
+
 /*
  * Each time the timer fires, we expect that we got an unpredictable
  * jump in the cycle counter. Even if the timer is running on another
@@ -1367,9 +1373,14 @@ EXPORT_SYMBOL_GPL(add_interrupt_randomness);
  *
  * So the re-arming always happens in the entropy loop itself.
  */
-static void entropy_timer(struct timer_list *t)
+static void entropy_timer(struct timer_list *timer)
 {
-	credit_entropy_bits(1);
+	struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
+
+	if (++state->samples == state->samples_per_bit) {
+		credit_entropy_bits(1);
+		state->samples = 0;
+	}
 }
 
 /*
@@ -1378,16 +1389,26 @@ static void entropy_timer(struct timer_list *t)
  */
 static void try_to_generate_entropy(void)
 {
-	struct {
-		unsigned long entropy;
-		struct timer_list timer;
-	} stack;
+	enum { NUM_TRIALS = 2048, MAX_SAMPLES_PER_BIT = 256 };
+	struct entropy_timer_state stack;
+	unsigned int i, num_different = 1;
 
-	stack.entropy = random_get_entropy();
-
-	/* Slow counter - or none. Don't even bother */
-	if (stack.entropy == random_get_entropy())
+	unsigned long *trials = kmalloc_array(NUM_TRIALS, sizeof(*trials), GFP_KERNEL);
+	if (!trials)
 		return;
+	for (i = 0; i < NUM_TRIALS; ++i)
+		trials[i] = random_get_entropy();
+	for (i = 0; i < NUM_TRIALS - 1; ++i) {
+		if (trials[i] != trials[i + 1])
+			++num_different;
+	}
+	mix_pool_bytes(trials, NUM_TRIALS * sizeof(*trials));
+	kfree(trials);
+	stack.samples_per_bit = DIV_ROUND_UP(NUM_TRIALS, num_different);
+	if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT)
+		return;
+	stack.samples = 0;
+	stack.entropy = random_get_entropy();
 
 	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
