Message-ID: <20250819100046.ymb_o7VA@linutronix.de>
Date: Tue, 19 Aug 2025 12:00:46 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Waiman Long <llong@...hat.com>
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev, Boqun Feng <boqun.feng@...il.com>,
Clark Williams <clrkwllms@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
John Ogness <john.ogness@...utronix.de>,
Jonathan Corbet <corbet@....net>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <vschneid@...hat.com>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH v2 2/3] Documentation: locking: Add
local_lock_nested_bh() to locktypes
On 2025-08-18 14:06:39 [-0400], Waiman Long wrote:
> > index 80c914f6eae7a..37b6a5670c2fa 100644
> > --- a/Documentation/locking/locktypes.rst
> > +++ b/Documentation/locking/locktypes.rst
> > @@ -204,6 +204,27 @@ per-CPU data structures on a non PREEMPT_RT kernel.
> > local_lock is not suitable to protect against preemption or interrupts on a
> > PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
> > +CPU local scope and bottom-half
> > +-------------------------------
> > +
> > +Per-CPU variables that are accessed only in softirq context should not rely on
> > +the assumption that this context is implicitly protected due to being
> > +non-preemptible. In a PREEMPT_RT kernel, softirq context is preemptible, and
> > +synchronizing every bottom-half-disabled section via implicit context results
> > +in an implicit per-CPU "big kernel lock."
> > +
> > +A local_lock_t together with local_lock_nested_bh() and
> > +local_unlock_nested_bh() for locking operations help to identify the locking
> > +scope.
> > +
> > +When lockdep is enabled, these functions verify that data structure access
> > +occurs within softirq context.
> > +Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
> > +does not add overhead when used without lockdep.
>
> Should it be local_lock_nested_bh()? It doesn't make sense to compare
> local_unlock_nested_bh() against local_lock(). In a PREEMPT_RT kernel,
> local_lock() disables migration but not preemption.
Yes, it should have been the lock and not the unlock part. I mention
only the preemption part here because this sentence focuses on the !RT
behaviour compared to local_lock(): it does not disable preemption and
adds no overhead.
The PREEMPT_RT paragraph below says that it behaves as a real lock, so
that should be enough without mentioning migration. Technically
migration must already be disabled at this point, so we could omit
disabling it again, but since it is just a counter increment/decrement
we would not win much by doing so.
I made the following:
@@ -219,11 +219,11 @@ scope.
When lockdep is enabled, these functions verify that data structure access
occurs within softirq context.
-Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
+Unlike local_lock(), local_lock_nested_bh() does not disable preemption and
does not add overhead when used without lockdep.
On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
-local_unlock_nested_bh() serializes access to the data structure, which allows
+local_lock_nested_bh() serializes access to the data structure, which allows
removal of serialization via local_bh_disable().
raw_spinlock_t and spinlock_t
Good?
> Cheers,
> Longman
>
> > +
> > +On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
> > +local_unlock_nested_bh() serializes access to the data structure, which allows
> > +removal of serialization via local_bh_disable().
> > raw_spinlock_t and spinlock_t
> > =============================
Sebastian