Date:   Fri, 28 Jan 2022 16:48:17 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Andy Lutomirski <luto@...capital.net>,
        Jonathan Neuschäfer <j.neuschaefer@....net>,
        "Theodore Ts'o" <tytso@....edu>,
        LKML <linux-kernel@...r.kernel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Waiman Long <longman@...hat.com>,
        Boqun Feng <boqun.feng@...il.com>
Cc:     Andy Lutomirski <luto@...nel.org>, stable <stable@...r.kernel.org>
Subject: Re: [PATCH] random: remove batched entropy locking

Hi Andy,

On Fri, Jan 28, 2022 at 4:34 PM Jason A. Donenfeld <Jason@...c4.com> wrote:
> Andy - could you take a look at this and let me know if it's still
> correct after I've ported it out of your series and into a standalone
> thing here? I'd prefer to hold off on moving forward on this until I
> receive your green light. I'm also still a bit uncertain about your NB:
> comment regarding the acceptable race. If you could elaborate on that
> argument, it might save me a few cycles with my thinking cap on.
[...]
> +       position = READ_ONCE(batch->position);
> +       /* NB: position can change to ARRAY_SIZE(batch->entropy_u64) out
> +        * from under us -- see invalidate_batched_entropy().  If this
> +        * happens it's okay if we still return the data in the batch. */
> +       if (unlikely(position + 1 > ARRAY_SIZE(batch->entropy_u64))) {
>                 extract_crng((u8 *)batch->entropy_u64);
> -               batch->position = 0;
> +               position = 0;

So the argument for this is something along the lines of: if the pool
is invalidated _after_ the position value is read, then in the worst
case we read old data and set position to 1, so the remaining reads
_also_ return invalidated data, until that eventually depletes and we
get newly extracted data? In other words, in some racy situations
we're lengthening the window during which get_random_u{32,64}()
returns bad randomness, right?
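
Just to double-check I'm picturing the same interleaving, spelled out
(the concrete indices are only for illustration):

  1. get_random_u64(): position = READ_ONCE(batch->position);  /* reads 0 */
  2. invalidate_batched_entropy() runs elsewhere and sets batch->position
     to ARRAY_SIZE(batch->entropy_u64).
  3. get_random_u64(): ret = batch->entropy_u64[0];  /* stale batch entry */
  4. get_random_u64(): WRITE_ONCE(batch->position, 1);  /* clobbers (2) */
  5. Subsequent calls keep handing out entropy_u64[1], entropy_u64[2], ...
     until the batch runs out and extract_crng() finally refills it.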

Couldn't that race be avoided entirely by something like the below?

Jason

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 50c50a59847e..664f1a7eb293 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -2082,14 +2082,15 @@ u64 get_random_u64(void)
 	u64 ret;
 	unsigned long flags;
 	struct batched_entropy *batch;
-	unsigned int position;
+	unsigned int position, original;
 	static void *previous;
 
 	warn_unseeded_randomness(&previous);
 
 	local_irq_save(flags);
 	batch = this_cpu_ptr(&batched_entropy_u64);
-	position = READ_ONCE(batch->position);
+try_again:
+	original = position = READ_ONCE(batch->position);
 	/* NB: position can change to ARRAY_SIZE(batch->entropy_u64) out
 	 * from under us -- see invalidate_batched_entropy().  If this
 	 * happens it's okay if we still return the data in the batch. */
@@ -2098,6 +2099,7 @@ u64 get_random_u64(void)
 		position = 0;
 	}
 	ret = batch->entropy_u64[position++];
-	WRITE_ONCE(batch->position, position);
+	if (unlikely(cmpxchg(&batch->position, original, position) != original))
+		goto try_again;
 	local_irq_restore(flags);
 	return ret;
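
For clarity, that's meant to leave the tail of get_random_u64() looking
roughly like this (hunks applied by hand, not compile-tested):

	local_irq_save(flags);
	batch = this_cpu_ptr(&batched_entropy_u64);
try_again:
	original = position = READ_ONCE(batch->position);
	/* NB: position can change to ARRAY_SIZE(batch->entropy_u64) out
	 * from under us -- see invalidate_batched_entropy().  If this
	 * happens it's okay if we still return the data in the batch. */
	if (unlikely(position + 1 > ARRAY_SIZE(batch->entropy_u64))) {
		extract_crng((u8 *)batch->entropy_u64);
		position = 0;
	}
	ret = batch->entropy_u64[position++];
	if (unlikely(cmpxchg(&batch->position, original, position) != original))
		goto try_again;
	local_irq_restore(flags);
	return ret;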
