Message-ID: <ZxnVmCqk2PzsOj2h@Boquns-Mac-mini.local>
Date: Wed, 23 Oct 2024 22:05:28 -0700
From: Boqun Feng <boqun.feng@...il.com>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Dirk Behme <dirk.behme@...il.com>, Lyude Paul <lyude@...hat.com>,
rust-for-linux@...r.kernel.org, Danilo Krummrich <dakr@...hat.com>,
airlied@...hat.com, Ingo Molnar <mingo@...hat.com>, will@...nel.org,
Waiman Long <longman@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, linux-kernel@...r.kernel.org,
Miguel Ojeda <ojeda@...nel.org>,
Alex Gaynor <alex.gaynor@...il.com>, wedsonaf@...il.com,
Gary Guo <gary@...yguo.net>,
Björn Roy Baron <bjorn3_gh@...tonmail.com>,
Benno Lossin <benno.lossin@...ton.me>,
Andreas Hindborg <a.hindborg@...sung.com>, aliceryhl@...gle.com,
Trevor Gross <tmgross@...ch.edu>
Subject: Re: [POC 1/6] irq & spin_lock: Add counted interrupt
disabling/enabling

On Wed, Oct 23, 2024 at 09:34:27PM +0200, Thomas Gleixner wrote:
> On Thu, Oct 17 2024 at 22:51, Boqun Feng wrote:
> > Currently the nested interrupt disabling and enabling is present by
> > [...]
> > Also add the corresponding spin_lock primitives: spin_lock_irq_disable()
> > and spin_unlock_irq_enable(); as a result, code like the following:
> >
> > spin_lock_irq_disable(l1);
> > spin_lock_irq_disable(l2);
> > spin_unlock_irq_enable(l1);
> > // Interrupts are still disabled.
> > spin_unlock_irq_enable(l2);
> >
> > doesn't have the issue that interrupts are accidentally enabled.
> >
> > This also makes the Rust wrapper for interrupt-disabling locks easier
> > to design.
>
> Clever!
>

Thanks! ;-)

> > +DECLARE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
> > +
> > +static inline void local_interrupt_disable(void)
> > +{
> > +        unsigned long flags;
> > +        long new_count;
> > +
> > +        local_irq_save(flags);
> > +
> > +        new_count = raw_cpu_inc_return(local_interrupt_disable_state.count);
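
[ The quoted hunk is trimmed here; for completeness, a minimal sketch of
  the matching enable side of this per-CPU-counter approach (the .flags
  field and the exact accessors below are illustrative, not necessarily
  the POC's verbatim code):

  static inline void local_interrupt_enable(void)
  {
          long new_count;

          new_count = raw_cpu_dec_return(local_interrupt_disable_state.count);

          /* Only the outermost enable restores the saved flags. */
          if (new_count == 0)
                  local_irq_restore(raw_cpu_read(local_interrupt_disable_state.flags));
  }
]
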
>
> Ideally you make that part of the preemption count. Bits 24-30 are free
> (or we can move them around as needed). That's deep enough and you get
> the debug sanity checking of the preemption counter for free (might need
> some extra debug for this...)
>
> So then this becomes:
>
> local_interrupt_disable()
> {
>         cnt = preempt_count_add_return(LOCALIRQ_OFFSET);
>         if ((cnt & LOCALIRQ_MASK) == LOCALIRQ_OFFSET) {
>                 local_irq_save(flags);
>                 this_cpu_write(..., flags);
>         }
> }
>
> and
>
> local_interrupt_enable()
> {
>         if ((preempt_count() & LOCALIRQ_MASK) == LOCALIRQ_OFFSET) {
>                 local_irq_restore(this_cpu_read(...flags));
>                 preempt_count_sub_test_resched(LOCALIRQ_OFFSET);
>         } else {
>                 // Does not need a resched test because it's not going
>                 // to 0
>                 preempt_count_sub(LOCALIRQ_OFFSET);
>         }
> }
>

Yes, this looks nice. One tiny problem is that it requires
PREEMPT_COUNT=y ;-) Maybe we can do this: if PREEMPT_COUNT=y, we use the
preempt count, otherwise we use a percpu counter?

Hmm... but that would essentially mean: we have an irq_disable_count()
which is always built in, and we also use it as the preempt count if
PREEMPT_COUNT=y. That doesn't look too bad to me.

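Roughly, something like this (only a sketch of the idea -- the #ifdef
split and the .flags field are illustrative, and preempt_count_add_return()
is the new helper from your sketch above, not an existing one):

static inline void local_interrupt_disable(void)
{
        unsigned long flags;

#ifdef CONFIG_PREEMPT_COUNT
        /* Count the nesting level in the free preempt_count bits. */
        if ((preempt_count_add_return(LOCALIRQ_OFFSET) & LOCALIRQ_MASK) ==
            LOCALIRQ_OFFSET) {
                local_irq_save(flags);
                this_cpu_write(local_interrupt_disable_state.flags, flags);
        }
#else
        /* No preempt count available: fall back to a per-CPU counter. */
        local_irq_save(flags);
        if (raw_cpu_inc_return(local_interrupt_disable_state.count) == 1)
                raw_cpu_write(local_interrupt_disable_state.flags, flags);
#endif
}
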
> and then the lock thing becomes
>
> spin_lock_irq_disable()
> {
>         local_interrupt_disable();
>         lock();
> }
>
> spin_unlock_irq_enable()
> {
>         unlock();
>         local_interrupt_enable();
> }
>
> instead of having to do:
>
> spin_unlock_irq_enable()
> {
>         unlock();
>         local_interrupt_enable();
>         preempt_enable();
> }
>
> Which needs two distinct checks, one for the interrupt and one for the
> preemption counter. Hmm?

No? Since we fold the interrupt disable count into the preempt count, we
don't need to care about the preempt count separately any more when we
use local_interrupt_{disable,enable}(). For example, in the above
local_interrupt_enable(), interrupts are checked at local_irq_restore()
and preemption is checked at preempt_count_sub_test_resched(). Right?

Regards,
Boqun
>
> Thanks,
>
> tglx