Date:   Fri, 11 Feb 2022 17:58:57 +0100
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Theodore Ts'o <tytso@....edu>,
        Sultan Alsawaf <sultan@...neltoast.com>,
        Jonathan Neuschäfer <j.neuschaefer@....net>,
        Dominik Brodowski <linux@...inikbrodowski.net>
Subject: Re: [PATCH v6] random: defer fast pool mixing to worker

On 2022-02-11 17:50:34 [+0100], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,

> > I *think* we could drop that "fast_pool !=
> > this_cpu_ptr(&irq_randomness)" check at the top, since that cmpxchg
> > will save us and redo the loop. But if I remember correctly, you
> > worried about fast_pool->pool being modified (which is only a corner
> > case, hit if we are on the other CPU while the original CPU is taking
> > interrupts again). Either way, the data would still be random and we
> > would not consume more entropy.
> 
> No, we cannot, and "it's all random anyway so who cares if we corrupt
> things!" is not rigorous, as entropy may actually be thrown away as
> it's moved between words on each mix. If we're not running on the same
> CPU, one CPU can corrupt the other's view of fast pool before updating
> count. We must keep this.

Okay, I assumed something like that.
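
To spell the scenario out for the archive: the check in question sits
at the top of the worker, roughly like this (a sketch, not the actual
v6 code; function and field names are approximate):

static void mix_interrupt_randomness(struct work_struct *work)
{
	struct fast_pool *fast_pool =
		container_of(work, struct fast_pool, mix);

	/*
	 * The worker may run on a different CPU than the one owning
	 * this per-CPU fast_pool (e.g. after CPU hotplug). The owning
	 * CPU may be taking interrupts again and writing to
	 * fast_pool->pool concurrently, so mixing a half-updated pool
	 * could silently throw entropy away. Bail out instead.
	 */
	if (fast_pool != this_cpu_ptr(&irq_randomness))
		return;

	/* ... copy the pool to the stack and mix it in ... */
}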

> > So if we have to keep this then please swap that migrate_disable() with
> > local_irq_disable(). Otherwise PeterZ will yell at me.
> 
> Okay, I'll do that then, and then in the process get rid of the
> cmpxchg loop since it's no longer required.
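
With interrupts disabled, the worker then becomes roughly (again only a
sketch of the idea):

	u32 pool[4];	/* stack copy; type and size approximate */

	local_irq_disable();
	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
		local_irq_enable();
		return;
	}
	/*
	 * We are on the owning CPU and irqs are off, so the hard-irq
	 * handler cannot modify the pool under us: a plain copy and
	 * reset suffices, and the cmpxchg loop goes away.
	 */
	memcpy(pool, fast_pool->pool, sizeof(fast_pool->pool));
	fast_pool->count = 0;
	fast_pool->last = jiffies;
	local_irq_enable();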

So the only reason why we have that atomic_t is the rare case where we
run on the remote CPU and need to remove the upper bit in the counter?
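
That is, roughly (sketch only; the flag-bit name is made up):

#define MIX_INFLIGHT	(1U << 31)	/* upper bit: worker pending */

	/* hard irq, add_interrupt_randomness(): */
	new_count = atomic_inc_return(&fast_pool->count);
	if ((new_count & MIX_INFLIGHT) || new_count < 64)
		return;
	atomic_or(MIX_INFLIGHT, &fast_pool->count);
	queue_work_on(raw_smp_processor_id(), system_highpri_wq,
		      &fast_pool->mix);

	/* worker, if it was migrated to a remote CPU: */
	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
		/* clear only the in-flight bit; leave the pool alone */
		atomic_andnot(MIX_INFLIGHT, &fast_pool->count);
		return;
	}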

> > >       if (unlikely(crng_init == 0)) {
> > > -             if (fast_pool->count >= 64 &&
> > > +             if (new_count >= 64 &&
> > >                   crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) {
> > > -                     fast_pool->count = 0;
> > > +                     atomic_set(&fast_pool->count, 0);
> > >                       fast_pool->last = now;
> >
> > I'm fine if we keep this as is for now.
> > What do we do here vs. RT? I suggested this:
> >   https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/commit/?id=a2d2d54409481aa23a3e11ab9559a843e36a79ec
> >
> > Is this doable?
> 
> It might be, but last time I checked it seemed problematic. As I
> mentioned in an earlier thread, I'll take another look at it next
> week, once this patch settles. Haven't forgotten.

Ah, cheers.
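
(For the archive: the problem on PREEMPT_RT is that crng_fast_load()
takes primary_crng.lock, a spinlock_t, which is a sleeping lock on RT
and must not be acquired from hard-IRQ context. The linked patch moves
the call out of the interrupt and into the worker, roughly like this;
a sketch of the idea, not the actual commit:)

	/*
	 * Worker (process context), where spinlock_t is fine on RT.
	 * "pool" is the stack copy of fast_pool->pool.
	 */
	if (unlikely(crng_init == 0)) {
		if (crng_fast_load(pool, sizeof(pool)) > 0) {
			atomic_set(&fast_pool->count, 0);
			fast_pool->last = jiffies;
		}
		return;
	}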

> v+1 coming up with irqs disabled.
> 
> Jason

Sebastian
