Message-ID: <84d7adee-fa83-4a8b-8476-820212dc929e@suse.cz>
Date: Tue, 1 Apr 2025 16:18:02 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: bpf@...r.kernel.org, daniel@...earbox.net, andrii@...nel.org,
martin.lau@...nel.org, akpm@...ux-foundation.org, peterz@...radead.org,
bigeasy@...utronix.de, rostedt@...dmis.org, shakeel.butt@...ux.dev,
mhocko@...e.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] locking/local_lock, mm: Replace localtry_ helpers with
local_trylock_t type
On 4/1/25 02:51, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@...nel.org>
>
> Partially revert commit 0aaddfb06882 ("locking/local_lock: Introduce localtry_lock_t").
> Remove the localtry_*() helpers, since the localtry_lock() name might
> be misinterpreted as "try lock".
>
> Introduce local_trylock_irqsave() helper that only works
Should this say "Introduce local_trylock[_irqsave]() helpers that only
work", since both variants are added?
> with the newly introduced local_trylock_t type.
> Note that an attempt to use local_trylock_irqsave() with local_lock_t
> will cause a compilation failure.
>
> Usage and behavior in !PREEMPT_RT:
>
> local_lock_t lock; // sizeof(lock) == 0
local_lock(&lock); // preempt disable
> local_lock_irqsave(&lock, ...); // irq save
> if (local_trylock_irqsave(&lock, ...)) // compilation error
>
> local_trylock_t lock; // sizeof(lock) == 4
ditto - a local_lock(&lock) line (preempt disable, acquired = 1) would be
useful here as well
> local_lock_irqsave(&lock, ...); // irq save and acquired = 1
> if (local_trylock_irqsave(&lock, ...)) // if (!acquired) irq save
>
> The existing local_lock_*() macros can be used either with
> local_lock_t or local_trylock_t.
> With local_trylock_t they set acquired = 1 while local_unlock_*() clears it.
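
For illustration, a minimal !PREEMPT_RT usage sketch (the struct, variable
and function names below are made up, not taken from the series, and it
assumes the INIT_LOCAL_TRYLOCK() initializer this patch provides):

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct foo_stock {
	local_trylock_t lock;
	unsigned long count;
};

static DEFINE_PER_CPU(struct foo_stock, foo_stock) = {
	.lock = INIT_LOCAL_TRYLOCK(lock),
};

static void foo_add(unsigned long n)
{
	unsigned long flags;

	/* irqs off, acquired = 1 */
	local_lock_irqsave(&foo_stock.lock, flags);
	__this_cpu_add(foo_stock.count, n);
	/* acquired = 0, irqs restored */
	local_unlock_irqrestore(&foo_stock.lock, flags);
}
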
>
> In !PREEMPT_RT local_lock_irqsave(local_lock_t *) disables interrupts
> to protect the critical section, but it doesn't prevent NMIs, so fully
> reentrant code cannot use local_lock_irqsave(local_lock_t *) for
> exclusive access.
>
> The local_lock_irqsave(local_trylock_t *) helper disables interrupts
> and sets acquired=1, so local_trylock_irqsave(local_trylock_t *) from
> NMI attempting to acquire the same lock will return false.
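
A sketch of that reentrancy pattern, continuing the hypothetical foo_stock
example from above - the regular path keeps using local_lock_irqsave() as
shown earlier, while the NMI path only tries and falls back on failure:

static bool foo_add_from_nmi(unsigned long n)
{
	unsigned long flags;

	/* Never spin in NMI, only try. */
	if (!local_trylock_irqsave(&foo_stock.lock, flags))
		return false;	/* interrupted context owns the lock, defer */

	__this_cpu_add(foo_stock.count, n);
	local_unlock_irqrestore(&foo_stock.lock, flags);
	return true;
}
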
>
> In PREEMPT_RT local_lock_irqsave() maps to preemptible spin_lock().
> Map local_trylock_irqsave() to preemptible spin_trylock().
> When in hard IRQ or NMI, return false right away, since
> spin_trylock() is not safe there due to the explicit locking in the
> underlying rt_spin_trylock() implementation. Removing this explicit locking and
> attempting only "trylock" is undesired due to PI implications.
And something like:
The local_trylock() without _irqsave can be used to avoid the cost of
disabling/enabling interrupts by only disabling preemption, so
local_trylock() in an interrupt attempting to acquire the same
lock will return false.
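
A sketch of that cheaper variant, again using the hypothetical foo_stock
from the earlier sketch (plain local_lock()/local_trylock() only disable
preemption here, no irqsave):

static void foo_add_task(unsigned long n)
{
	/* Task/BH context: preemption off, acquired = 1. */
	local_lock(&foo_stock.lock);
	__this_cpu_add(foo_stock.count, n);
	local_unlock(&foo_stock.lock);
}

static bool foo_add_irq(unsigned long n)
{
	/* Interrupt context: fails if it hit the critical section above. */
	if (!local_trylock(&foo_stock.lock))
		return false;

	__this_cpu_add(foo_stock.count, n);
	local_unlock(&foo_stock.lock);
	return true;
}
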
> Note there is no need to use local_inc for the 'acquired' variable,
> since it's a percpu variable with strict nesting scopes.
>
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Is there a chance this refactoring will make it to -rc1? It would make
basing further work on this lock in the mm and slab trees much easier.
But squash in the following fixups please:
----8<----
From bc9098ebb58a2958010428c9294547934852ffa2 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@...e.cz>
Date: Tue, 1 Apr 2025 15:25:21 +0200
Subject: [PATCH] fixup! locking/local_lock, mm: Replace localtry_ helpers with
local_trylock_t type
---
include/linux/local_lock.h | 5 ++---
include/linux/local_lock_internal.h | 21 +++++++++++++++++++--
2 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index 9262109cca51..7ac9385cd475 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -52,15 +52,14 @@
__local_unlock_irqrestore(lock, flags)
/**
- * local_trylock_irqsave - Try to acquire a per CPU local lock
+ * local_trylock - Try to acquire a per CPU local lock
* @lock: The lock variable
- * @flags: Storage for interrupt flags
*
* The function can be used in any context such as NMI or HARDIRQ. Due to
* locking constrains it will _always_ fail to acquire the lock in NMI or
* HARDIRQ context on PREEMPT_RT.
*/
-#define local_trylock(lock, flags) __local_trylock(lock, flags)
+#define local_trylock(lock) __local_trylock(lock)
/**
* local_trylock_irqsave - Try to acquire a per CPU local lock, save and disable
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index cc79854206df..5634383c8e9e 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -23,7 +23,7 @@ typedef struct {
#endif
/*
* Same layout as local_lock_t with 'acquired' field at the end.
- * (local_trylock_t *) will be casted to (local_lock_t *).
+ * (local_trylock_t *) will be cast to (local_lock_t *).
*/
int acquired;
} local_trylock_t;
@@ -80,7 +80,7 @@ do { \
lockdep_init_map_type(&(lock)->dep_map, #lock, &__key, \
0, LD_WAIT_CONFIG, LD_WAIT_INV, \
LD_LOCK_PERCPU); \
- local_lock_debug_init(lock); \
+ local_lock_debug_init((local_lock_t *)lock); \
} while (0)
#define __spinlock_nested_bh_init(lock) \
@@ -128,6 +128,23 @@ do { \
__local_lock_acquire(lock); \
} while (0)
+#define __local_trylock(lock) \
+ ({ \
+ local_trylock_t *tl; \
+ \
+ preempt_disable(); \
+ tl = this_cpu_ptr(lock); \
+ if (READ_ONCE(tl->acquired) == 1) { \
+ preempt_enable(); \
+ tl = NULL; \
+ } else { \
+ WRITE_ONCE(tl->acquired, 1); \
+ local_trylock_acquire( \
+ (local_lock_t *)tl); \
+ } \
+ !!tl; \
+ })
+
#define __local_trylock_irqsave(lock, flags) \
({ \
local_trylock_t *tl; \
--
2.49.0