Message-ID: <CAHmME9pmGAAsUf4bLABB6oCDqDxVZBcQzADiDoA8ZD2s5n_1LQ@mail.gmail.com>
Date:   Fri, 28 Jan 2022 17:36:57 +0100
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     Andy Lutomirski <luto@...capital.net>,
        Jonathan Neuschäfer <j.neuschaefer@....net>,
        "Theodore Ts'o" <tytso@....edu>,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
        Waiman Long <longman@...hat.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Andy Lutomirski <luto@...nel.org>,
        stable <stable@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] random: remove batched entropy locking

Hi Sebastian,

I wrote in my last message, "I don't think that thread needs to spill
over here, though," but clearly you disagreed, which is fine I guess.
Replies inline below:

On Fri, Jan 28, 2022 at 5:15 PM Sebastian Andrzej Siewior
<bigeasy@...utronix.de> wrote:
> > I did, and my reply is here:
> > https://lore.kernel.org/lkml/CAHmME9pzdXyD0oRYyCoVUSqqsA9h03-oR7kcNhJuPEcEMTJYgw@mail.gmail.com/
> >
> > I was hoping for a series that addresses these issues. As I mentioned
> > before, I'm not super keen on deferring that processing in a
> > conditional case and having multiple entry ways into that same
> > functionality. I don't think that's worth it, especially if your
> > actual concern is just userspace calling RNDADDTOENTCNT too often
> > (which can be safely ratelimited). I don't think that thread needs to
>
> And what do you do in ratelimiting?

If I'm understanding the RT issue correctly, the problem is that
userspace can `for (;;) ioctl(...);`, and the high frequency of ioctls
then pushes the average interrupt latency beyond what RT requires. The
point of ratelimiting the ioctl would be to throttle userspace from
calling it too often, so that the average latency isn't increased.
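Concretely, I was imagining something like the below (untested sketch;
the handler shape and the interval/burst numbers are just placeholders,
not an actual proposal):

    #include <linux/fs.h>
    #include <linux/errno.h>
    #include <linux/ratelimit.h>
    #include <linux/random.h>

    /* Allow a burst of 20 RNDADDTOENTCNT calls per 5 seconds; anything
     * beyond that bounces with -EAGAIN instead of doing the work. */
    static DEFINE_RATELIMIT_STATE(rndaddtoentcnt_rs, 5 * HZ, 20);

    static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
    {
            switch (cmd) {
            case RNDADDTOENTCNT:
                    if (!__ratelimit(&rndaddtoentcnt_rs))
                            return -EAGAIN;
                    /* ... existing entropy-credit path ... */
                    return 0;
            /* ... other commands unchanged ... */
            default:
                    return -EINVAL;
            }
    }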

> As I explained, you get 20 that
> "enter" and the following are blocked. The first 20 are already
> problematic and you need a plan-B for those that can't enter.
> So I suggested a mutex_t around the ioctl() which would act as rate
> limiting. You did not follow up on that idea.

A mutex_t would be fine I think? I'd like to see what this looks like
in code, but conceptually I don't see why not.
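For reference, the shape I'd expect is roughly this (again an untested
sketch; random_ioctl_locked() is a made-up helper standing in for the
current ioctl body):

    #include <linux/fs.h>
    #include <linux/errno.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(random_ioctl_lock);

    static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
    {
            long ret;

            /* Serialize callers: a task spinning on the ioctl sleeps here
             * rather than repeatedly poking state the IRQ path cares about. */
            if (mutex_lock_interruptible(&random_ioctl_lock))
                    return -ERESTARTSYS;
            ret = random_ioctl_locked(f, cmd, arg); /* hypothetical helper */
            mutex_unlock(&random_ioctl_lock);
            return ret;
    }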

> Please ignore Jonathan report for now. As I tried to explain: This
> lockdep report shows a serious problem on PREEMPT_RT. There is _no_ need
> to be concerned on a non-PREEMPT_RT kernel. But it should be addressed.
> If this gets merged as-is then thanks to the stable tag it will get
> backported (again no change for !RT) and will collide with PREEMPT_RT
> patch. And as I mentioned, the locking is not working on PREEMPT_RT.

Gotcha, okay, that makes sense. It sounds like Andy's patch and your
patch might be two sides of the same non-stable-marked coin, both
aimed at cutting down on locks in the IRQ path.
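
[Just to check I'm restating the RT constraint correctly: on
PREEMPT_RT a spinlock_t becomes a sleeping lock, so taking one from
the hardirq path is invalid, and only raw_spinlock_t keeps the old
spinning semantics there. Illustrative only, names made up:

    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    static DEFINE_SPINLOCK(batch_lock);         /* sleeping lock on RT */
    static DEFINE_RAW_SPINLOCK(batch_raw_lock); /* still spins on RT */

    static irqreturn_t rng_irq_handler(int irq, void *dev)
    {
            /* Invalid on PREEMPT_RT: may sleep in hardirq context.
             * spin_lock(&batch_lock); ... spin_unlock(&batch_lock); */

            /* Allowed, but holds everything off while held: */
            raw_spin_lock(&batch_raw_lock);
            /* ... touch the per-cpu batch ... */
            raw_spin_unlock(&batch_raw_lock);
            return IRQ_HANDLED;
    }

Which is why those locks need to either become raw or go away.]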

[Relatedly, I've been doing a bit of research on other ways to cut
down the amount of processing we're doing in the IRQ path, such as
<https://xn--4db.cc/K4zqXPh8/diff>. This is really not ready to go,
and I'm not ready to have a discussion on the crypto there (please,
nobody comment on the crypto there yet; I'll be really annoyed), but
the general gist is that I think it might be possible to reduce the
number of cycles spent in IRQ with some nice new tricks.]

Jason
