Message-ID: <593abb4c-12d6-4d61-a41e-f258cb8f22c6@redhat.com>
Date: Tue, 7 Jan 2025 21:19:31 -0500
From: Waiman Long <llong@...hat.com>
To: Kumar Kartikeya Dwivedi <memxor@...il.com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Barret Rhoden <brho@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>, Waiman Long <llong@...hat.com>,
Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <martin.lau@...nel.org>,
Eduard Zingerman <eddyz87@...il.com>, "Paul E. McKenney"
<paulmck@...nel.org>, Tejun Heo <tj@...nel.org>,
Josh Don <joshdon@...gle.com>, Dohyun Kim <dohyunkim@...gle.com>,
kernel-team@...a.com
Subject: Re: [PATCH bpf-next v1 08/22] rqspinlock: Protect pending bit owners
from stalls
On 1/7/25 8:59 AM, Kumar Kartikeya Dwivedi wrote:
> The pending bit is used to avoid queueing in case the lock is
> uncontended, and has demonstrated benefits for the 2-contender
> scenario, especially on x86. If we acquire the pending bit and then
> wait for the locked bit to disappear, we may get stuck because the
> lock owner is not making progress. Hence, this waiting loop must be
> protected with a timeout check.
>
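For context, the guarded wait has roughly this shape (a sketch, not the
patch verbatim; "val" is the snapshot of the lock word, "deadline" is
derived from the timeout argument, and check_deadline() is a
hypothetical stand-in for the series' timeout-checking helper):

	int ret = 0;

	/*
	 * We hold the pending bit; wait for the owner to release the
	 * lock, but give up once the deadline passes instead of
	 * spinning indefinitely.
	 */
	if (val & _Q_LOCKED_MASK)
		smp_cond_load_acquire(&lock->locked,
				      !VAL || (ret = check_deadline(deadline)));
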
> To recover gracefully once we decide to abort our lock acquisition
> attempt in this case, we must unset the pending bit, since we own it.
> If all waiters undo their changes and exit gracefully, the lock word
> is restored to the unlocked state once all participants (owner,
> waiters) have recovered, and the lock remains usable. Hence, set the
> pending bit back to zero before returning to the caller.
>
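And the unwinding on timeout is then essentially (again a sketch;
clear_pending() mirrors the existing qspinlock helper of the same
name):

	if (ret) {
		/*
		 * We still own the pending bit, so set it back to zero
		 * before bailing out; once the owner and all waiters
		 * have unwound, the lock word reads as unlocked again
		 * and the lock remains usable.
		 */
		clear_pending(lock);
		lockevent_inc(rqspinlock_lock_timeout);
		return ret;
	}
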
> Introduce a lockevent (rqspinlock_lock_timeout) to capture timeout
> event statistics.
>
> Reviewed-by: Barret Rhoden <brho@...gle.com>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@...il.com>
> ---
> include/asm-generic/rqspinlock.h | 2 +-
> kernel/locking/lock_events_list.h | 5 +++++
> kernel/locking/rqspinlock.c | 28 +++++++++++++++++++++++-----
> 3 files changed, 29 insertions(+), 6 deletions(-)
>
> diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
> index 8ed266f4e70b..5c996a82e75f 100644
> --- a/include/asm-generic/rqspinlock.h
> +++ b/include/asm-generic/rqspinlock.h
> @@ -19,6 +19,6 @@ struct qspinlock;
> */
> #define RES_DEF_TIMEOUT (NSEC_PER_SEC / 2)
>
> -extern void resilient_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val, u64 timeout);
> +extern int resilient_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val, u64 timeout);
>
> #endif /* __ASM_GENERIC_RQSPINLOCK_H */
> diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
> index 97fb6f3f840a..c5286249994d 100644
> --- a/kernel/locking/lock_events_list.h
> +++ b/kernel/locking/lock_events_list.h
> @@ -49,6 +49,11 @@ LOCK_EVENT(lock_use_node4) /* # of locking ops that use 4th percpu node */
> LOCK_EVENT(lock_no_node) /* # of locking ops w/o using percpu node */
> #endif /* CONFIG_QUEUED_SPINLOCKS */
>
> +/*
> + * Locking events for Resilient Queued Spin Lock
> + */
> +LOCK_EVENT(rqspinlock_lock_timeout) /* # of locking ops that timeout */
> +
> /*
> * Locking events for rwsem
> */
Since rqspinlock.c is only built when CONFIG_QUEUED_SPINLOCKS is
enabled, this lock event should be placed inside the
CONFIG_QUEUED_SPINLOCKS block.
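
I.e., something like:

LOCK_EVENT(lock_no_node)	/* # of locking ops w/o using percpu node */

/*
 * Locking events for Resilient Queued Spin Lock
 */
LOCK_EVENT(rqspinlock_lock_timeout)	/* # of locking ops that timeout */
#endif /* CONFIG_QUEUED_SPINLOCKS */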
Cheers,
Longman