Message-ID: <CAHmME9rNv78wKvTNEvKVoC_CcxtVJdDU0cJvxjTYans1eugZqw@mail.gmail.com>
Date:   Wed, 9 Feb 2022 01:36:42 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Eric Biggers <ebiggers@...nel.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        "Theodore Ts'o" <tytso@....edu>,
        Sultan Alsawaf <sultan@...neltoast.com>,
        Jonathan Neuschäfer <j.neuschaefer@....net>
Subject: Re: [PATCH v3 2/2] random: defer fast pool mixing to worker

Hi Eric,

On Wed, Feb 9, 2022 at 1:12 AM Eric Biggers <ebiggers@...nel.org> wrote:
> So, add_interrupt_randomness() can execute on the same CPU re-entrantly at any
> time this is executing?  That could result in some pretty weird behavior, where
> the pool gets changed half-way through being used, so what is used is neither
> the old nor the new state of the pool.  Is there a reason why this is okay?

Yes, right, that's the "idea" of this patch, if you could call it such
a thing. The argument is that we set fast_pool->count to zero *after*
mixing in the existing bytes, plus whatever partial bytes an interrupt
might mix in halfway through the execution of mix_pool_bytes. Since we
set the count to zero afterward, we do not give any credit to those
partial bytes for the following set of 64 interrupts. What winds up
being mixed in will contain at least as much as would have been there
had the mixing not been interrupted. And what gets mixed in the next
time will only have more mixed in than it otherwise would have, not
less.
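To make the ordering argument concrete, here is a minimal userspace
sketch -- simplified names, and a toy XOR standing in for the real
mix_pool_bytes(), so purely illustrative:

```c
/* Hypothetical sketch of the ordering argument; not the kernel code. */
#include <assert.h>

struct fast_pool {
	unsigned char pool[16];
	unsigned long count;
};

/* An interrupt may still add bytes to fp->pool while this runs. */
static void mix_worker(struct fast_pool *fp, unsigned char *input_pool)
{
	int i;

	/* 1. Mix whatever is in the pool, including any partial bytes an
	 *    interrupt lands in halfway through -- those can only add to
	 *    what is mixed, never remove from it. */
	for (i = 0; i < 16; i++)
		input_pool[i] ^= fp->pool[i];

	/* 2. Only AFTER mixing do we reset the counter, so the partial
	 *    bytes mixed above get no credit and must be earned again
	 *    over the next batch of 64 interrupts. */
	fp->count = 0;
}
```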

> Is there a reason why the FAST_POOL_MIX_INFLIGHT is part of 'count' instead of a
> separate boolean?

So that we can clear it with a single WRITE_ONCE, and to save space in
the per-cpu crng.
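For illustration, a userspace sketch of packing the flag into the top
bit of the count -- the constant name follows the patch, but the helper
functions here are hypothetical simplifications:

```c
/* Hypothetical sketch: the in-flight flag lives in the top bit of the
 * interrupt tally, so a single store of zero (WRITE_ONCE in the
 * kernel) clears both the flag and the count at once. */
#include <assert.h>
#include <stdbool.h>

#define FAST_POOL_MIX_INFLIGHT (1UL << 31)

/* Number of interrupts counted so far, with the flag masked off. */
static unsigned long interrupts(unsigned long count)
{
	return count & ~FAST_POOL_MIX_INFLIGHT;
}

/* Whether a mixing worker has already been scheduled. */
static bool worker_scheduled(unsigned long count)
{
	return count & FAST_POOL_MIX_INFLIGHT;
}
```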

> Also, a high level question.  Now that the call to mix_pool_bytes() would no
> longer occur in hard IRQ context, how much reason is there to minimize the
> amount of data passed to it?  Would it be feasible to just concatenate the
> interrupt data into an array, and pass the whole array to mix_pool_bytes() --
> eliminating the homebrew ARX thing entirely?

Indeed, I'm working on replacing fast_mix() with something we can
actually reason about. I thought about a big array, but I'm not quite
convinced that the memory overhead would be worth it. Right now, each
interrupt generates 16 bytes of data, and we ingest that data after
64-ish interrupts -- sometimes more, and sometimes less. So that's a
whole kilobyte in the best case. That's not /tons/, but it's not
nothing either. So while the big array idea is one possibility, it's
not the only one I'm evaluating. Hopefully in the coming weeks I'll
have some ideas ready to discuss on that.

Jason
