Message-ID: <877c9y7dwx.ffs@tglx>
Date: Wed, 23 Oct 2024 22:38:38 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Boqun Feng <boqun.feng@...il.com>, Dirk Behme <dirk.behme@...il.com>,
 Lyude Paul <lyude@...hat.com>, rust-for-linux@...r.kernel.org, Danilo
 Krummrich <dakr@...hat.com>, airlied@...hat.com, Ingo Molnar
 <mingo@...hat.com>, will@...nel.org, Waiman Long <longman@...hat.com>,
 linux-kernel@...r.kernel.org, Miguel Ojeda <ojeda@...nel.org>, Alex Gaynor
 <alex.gaynor@...il.com>, wedsonaf@...il.com, Gary Guo <gary@...yguo.net>,
 Björn Roy Baron <bjorn3_gh@...tonmail.com>, Benno Lossin
 <benno.lossin@...ton.me>, Andreas Hindborg <a.hindborg@...sung.com>,
 aliceryhl@...gle.com, Trevor Gross <tmgross@...ch.edu>
Subject: Re: [POC 1/6] irq & spin_lock: Add counted interrupt
 disabling/enabling

On Wed, Oct 23 2024 at 21:51, Peter Zijlstra wrote:
> On Wed, Oct 23, 2024 at 09:34:27PM +0200, Thomas Gleixner wrote:
>> On Thu, Oct 17 2024 at 22:51, Boqun Feng wrote:
>> Ideally you make that part of the preemption count. Bit 24-30 are free
>> (or we can move them around as needed). That's deep enough and you get
>> the debug sanity checking of the preemption counter for free (might need
>> some extra debug for this...)
>
> Urgh, so we've already had trouble that nested spinlocks bust through
> the 0xff preempt mask (because lunacy).

Seriously? Such overflow should just panic the kernel. That's broken by
definition.

> You sure you want to be this stingy with bits?

Anything above 64 nest levels is beyond insane.

But if we want to support insanity, then we make the preempt count 64-bit
and be done with it. But no, I don't think that encouraging insanity is a
good thing.
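
For illustration only, such a counted disable could look roughly like the
sketch below. All names and the exact bit usage are made up here, not what
the POC actually does; it just builds on the existing preempt_count() and
raw_local_irq_*() machinery:

#define IRQ_DISABLE_SHIFT       24
#define IRQ_DISABLE_BITS        7
#define IRQ_DISABLE_OFFSET      (1UL << IRQ_DISABLE_SHIFT)
#define IRQ_DISABLE_MASK        (((1UL << IRQ_DISABLE_BITS) - 1) << IRQ_DISABLE_SHIFT)

static inline void counted_irq_disable(void)
{
        /* Overflowing the 7 bit field is a plain bug */
        WARN_ON_ONCE((preempt_count() & IRQ_DISABLE_MASK) == IRQ_DISABLE_MASK);
        /* Only the outermost level touches the hardware */
        if (!(preempt_count() & IRQ_DISABLE_MASK))
                raw_local_irq_disable();
        __preempt_count_add(IRQ_DISABLE_OFFSET);
}

static inline void counted_irq_enable(void)
{
        __preempt_count_sub(IRQ_DISABLE_OFFSET);
        /* Reenable only when the last level goes away */
        if (!(preempt_count() & IRQ_DISABLE_MASK))
                raw_local_irq_enable();
}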

> We still have a few holes in pcpu_hot iirc.

On x86. Sure.

But that's still an extra conditional, while when you stick it into the
preemption count it's _ONE_ conditional for both and not _TWO_.
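
IOW, one load of preempt_count() answers both questions. Roughly (again
only a sketch, reusing the made-up IRQ_DISABLE_MASK from above next to
the existing PREEMPT_MASK):

static inline bool irq_and_preempt_depth_zero(void)
{
        /* One load, one test covers both nesting counters */
        return !(preempt_count() & (PREEMPT_MASK | IRQ_DISABLE_MASK));
}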

It actually makes a lot of sense even for the non-Rust case to avoid
local_irq_save/restore. We discussed that for years, and I surely have
some half-finished patch set to implement it somewhere in the poison
cabinet.

The reason why we did not go for it is that we wanted to implement a
lazy interrupt disable scheme back then, i.e. just rely on the counter
and when the interrupt comes in, disable interrupts for real and then
reinject them when the counter goes to zero. That turned out to be
horribly complex and not worth the trouble.
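
For the record, the lazy idea was roughly along these lines (heavily
simplified sketch, not the actual patches; the reinjection part is where
it got ugly):

static DEFINE_PER_CPU(unsigned int, lazy_irq_depth);
static DEFINE_PER_CPU(bool, lazy_irq_pending);

static inline void lazy_irq_disable(void)
{
        /* Pure bookkeeping, no access to the interrupt hardware */
        this_cpu_inc(lazy_irq_depth);
}

/*
 * Called from low level interrupt entry. Returns true when the interrupt
 * has to be deferred because we are inside a lazily "disabled" section.
 */
static inline bool lazy_irq_intercept(void)
{
        if (!this_cpu_read(lazy_irq_depth))
                return false;
        this_cpu_write(lazy_irq_pending, true);
        raw_local_irq_disable();        /* Now disable for real */
        return true;
}

static inline void lazy_irq_enable(void)
{
        if (this_cpu_dec_return(lazy_irq_depth))
                return;
        if (this_cpu_read(lazy_irq_pending)) {
                this_cpu_write(lazy_irq_pending, false);
                raw_local_irq_enable();
                /* Reinjecting the deferred interrupt here is where the
                   real complexity starts ... */
        }
}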

But this scheme is different as it only avoids nested irq_save() and
allows using guards with the locking scheme Boqun pointed out.

It's even a win in C because you don't have to worry about lock_irq()
vs. lock_irqsave() anymore and can just use lock_irq_disable() or whatever
the bike shed painting debate will decide on.
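
IOW something like this, where the lock and guard names are pure
placeholders until the bike shed color is decided:

struct foo {
        spinlock_t lock;
        unsigned long count;
};

static DEFINE_SPINLOCK(stats_lock);
static unsigned long stats_total;

static void stats_add(unsigned long n)
{
        /* One primitive for all contexts, no flags to carry around */
        spin_lock_irq_disable(&stats_lock);
        stats_total += n;
        spin_unlock_irq_enable(&stats_lock);
}

static void foo_update(struct foo *foo)
{
        /* Hypothetical guard class on top of the same primitive */
        guard(spinlock_irq_disable)(&foo->lock);
        foo->count++;
        stats_add(1);           /* Nested, just bumps the disable count */
}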

Thanks,

        tglx
