Open Source and information security mailing list archives
 
Date:   Wed, 9 Feb 2022 13:49:39 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Sultan Alsawaf <sultan@...neltoast.com>,
        Jonathan Neuschäfer <j.neuschaefer@....net>
Subject: Re: [PATCH v3 2/2] random: defer fast pool mixing to worker

On Wed, Feb 9, 2022 at 1:36 AM Jason A. Donenfeld <Jason@...c4.com> wrote:
> On Wed, Feb 9, 2022 at 1:12 AM Eric Biggers <ebiggers@...nel.org> wrote:
> > So, add_interrupt_randomness() can execute on the same CPU re-entrantly at any
> > time this is executing?  That could result in some pretty weird behavior, where
> > the pool gets changed half-way through being used, so what is used is neither
> > the old nor the new state of the pool.  Is there a reason why this is okay?
>
> Yes, right, that's the "idea" of this patch, if you could call it such
> a thing. The argument is that we set fast_pool->count to zero *after*
> mixing in the existing bytes + whatever partial bytes might be mixed
> in on an interrupt halfway through the execution of mix_pool_bytes.
> Since we set the count to zero after, it means we do not give any
> credit to those partial bytes for the following set of 64 interrupts.
> What winds up being mixed in will contain at least as much as it
> would have had it not been interrupted. And what gets mixed in the
> next time will only have more mixed in than it otherwise would have,
> not less.

I can actually make it even better by memcpy()ing the fast pool first.
That way any races only affect the fast_mix side -- harmless as
described above -- without affecting blake2s. The latter was _already_
buffering its input, but memcpy()ing makes that explicit and doesn't
rely on that behavior. It also means that we get to set count to zero
a bit sooner.
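The approach can be sketched roughly as below. This is a hypothetical
user-space illustration, not the actual kernel patch: the struct layout,
sizes, and the mix_pool_bytes() stub are assumptions for the example, and
the real code mixes into blake2s rather than XORing bytes. The point it
shows is the ordering: snapshot the fast pool with memcpy() first, so a
concurrent interrupt running fast_mix() on the same CPU can only perturb
fast_pool->pool itself, never the bytes actually being consumed, and
count can be reset as soon as the copy is taken.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for the kernel's structures (sizes assumed). */
struct fast_pool {
	uint32_t pool[4];   /* entropy words mixed on each interrupt */
	unsigned int count; /* interrupts accumulated since last drain */
};

static uint8_t input_pool[16]; /* stand-in for the global input pool */

/* Stub for mix_pool_bytes(): here just XOR bytes into the input pool;
 * the real kernel feeds them into a blake2s state instead. */
static void mix_pool_bytes(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	for (size_t i = 0; i < len; i++)
		input_pool[i % sizeof(input_pool)] ^= p[i];
}

/* The deferred worker: snapshot first, then reset count, then mix the
 * snapshot. An interrupt landing mid-mix can only race on fp->pool and
 * fp->count -- harmless per the reasoning above -- while the copy being
 * mixed stays stable. */
static void mix_interrupt_randomness(struct fast_pool *fp)
{
	uint32_t pool[4];

	memcpy(pool, fp->pool, sizeof(pool));
	fp->count = 0; /* safe to reset early: we only use the copy now */
	mix_pool_bytes(pool, sizeof(pool));
}
```

In the kernel proper the reset would use WRITE_ONCE() and the worker runs
pinned to the pool's CPU, but the snapshot-before-use ordering is the
part this sketch is meant to capture.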
