Message-ID: <f7bc2566-20a7-41fb-ac59-5d6a8901d8fb@redhat.com>
Date: Tue, 7 Jan 2025 22:38:06 -0500
From: Waiman Long <llong@...hat.com>
To: Kumar Kartikeya Dwivedi <memxor@...il.com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: Barret Rhoden <brho@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>, Waiman Long <llong@...hat.com>,
Alexei Starovoitov <ast@...nel.org>, Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <martin.lau@...nel.org>,
Eduard Zingerman <eddyz87@...il.com>, "Paul E. McKenney"
<paulmck@...nel.org>, Tejun Heo <tj@...nel.org>,
Josh Don <joshdon@...gle.com>, Dohyun Kim <dohyunkim@...gle.com>,
kernel-team@...a.com
Subject: Re: [PATCH bpf-next v1 09/22] rqspinlock: Protect waiters in queue
from stalls
On 1/7/25 8:59 AM, Kumar Kartikeya Dwivedi wrote:
> Implement the wait queue cleanup algorithm for rqspinlock. There are
> three forms of waiters in the original queued spin lock algorithm. The
> first is the waiter which acquires the pending bit and spins on the lock
> word without forming a wait queue. The second is the head waiter, i.e.
> the first waiter in the wait queue. The third form comprises all the
> non-head waiters queued behind the head, waiting to be signalled through
> their MCS node to take over the responsibility of the head.
>
> In this commit, we are concerned with the second and third kind. First,
> we augment the waiting loop of the head of the wait queue with a
> timeout. When this timeout expires, all waiters that are part of the wait
> queue abort their lock acquisition attempts. This happens in three steps.
> First, the head breaks out of its loop waiting for pending and locked
> bits to turn to 0, and non-head waiters break out of their MCS node spin
> (more on that later). Next, every waiter (head or non-head) checks
> whether they are also the tail waiter; if so, they attempt to zero out
> the tail word and allow a new queue to be built up for this lock. If
> they succeed, they have no one to signal next in the queue to
> stop spinning. Otherwise, they signal the MCS node of the next waiter to
> break out of its spin and try resetting the tail word back to 0. This
> goes on until the tail waiter is found. In case of races, the new tail
> becomes responsible for performing the same task: the old tail fails to
> reset the tail word, waits for its next pointer to be updated, and then
> signals the new tail to do the same.
>
> Lastly, all of these waiters release the rqnode and return to the
> caller. This patch underscores the point that rqspinlock's timeout does
> not apply to each waiter individually, and cannot be relied upon as an
> upper bound. It is possible for the rqspinlock waiters to return early
> from a failed lock acquisition attempt as soon as stalls are detected.
>
> The head waiter cannot directly WRITE_ONCE the tail to zero, as it may
> race with a concurrent xchg and a non-head waiter linking its MCS node
> to the head's MCS node through 'prev->next' assignment.
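
For illustration, a minimal sketch of the cleanup path described above. This
is not the patch itself: it recasts the goto-based waitq_timeout flow from
the diff as a standalone function, and it assumes try_cmpxchg_tail() is the
helper added in rqspinlock.h (whose body is not shown in the quoted hunks).

static void rqspinlock_abort_waitqueue(struct qspinlock *lock, u32 tail,
				       struct mcs_spinlock *node,
				       struct mcs_spinlock *next)
{
	if (!next) {
		/*
		 * We still look like the tail: reset the tail word with a
		 * cmpxchg (not a WRITE_ONCE) so that losing a race against
		 * a concurrent xchg_tail() hands the cleanup duty to the
		 * new tail instead of wiping out its entry.
		 */
		if (try_cmpxchg_tail(lock, tail, 0))
			return;
		/* A new waiter exists; wait until it publishes prev->next. */
		next = smp_cond_load_relaxed(&node->next, VAL);
	}
	/* Tell the next MCS node to abort; it then repeats these steps. */
	smp_store_release(&next->locked, RES_TIMEOUT_VAL);
}

Losing the cmpxchg to a concurrent xchg_tail() is exactly the race the
previous paragraph describes; a plain store could wipe out the new tail.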
>
> Reviewed-by: Barret Rhoden <brho@...gle.com>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@...il.com>
> ---
> kernel/locking/rqspinlock.c | 42 +++++++++++++++++++++++++++++---
> kernel/locking/rqspinlock.h | 48 +++++++++++++++++++++++++++++++++++++
> 2 files changed, 87 insertions(+), 3 deletions(-)
> create mode 100644 kernel/locking/rqspinlock.h
>
> diff --git a/kernel/locking/rqspinlock.c b/kernel/locking/rqspinlock.c
> index dd305573db13..f712fe4b1f38 100644
> --- a/kernel/locking/rqspinlock.c
> +++ b/kernel/locking/rqspinlock.c
> @@ -77,6 +77,8 @@ struct rqspinlock_timeout {
>  	u16 spin;
>  };
>
> +#define RES_TIMEOUT_VAL 2
> +
>  static noinline int check_timeout(struct rqspinlock_timeout *ts)
>  {
>  	u64 time = ktime_get_mono_fast_ns();
> @@ -305,12 +307,18 @@ int __lockfunc resilient_queued_spin_lock_slowpath(struct qspinlock *lock, u32 v
>  	 * head of the waitqueue.
>  	 */
>  	if (old & _Q_TAIL_MASK) {
> +		int val;
> +
>  		prev = decode_tail(old, qnodes);
>
>  		/* Link @node into the waitqueue. */
>  		WRITE_ONCE(prev->next, node);
>
> -		arch_mcs_spin_lock_contended(&node->locked);
> +		val = arch_mcs_spin_lock_contended(&node->locked);
> +		if (val == RES_TIMEOUT_VAL) {
> +			ret = -EDEADLK;
> +			goto waitq_timeout;
> +		}
>
>  		/*
>  		 * While waiting for the MCS lock, the next pointer may have
> @@ -334,7 +342,35 @@ int __lockfunc resilient_queued_spin_lock_slowpath(struct qspinlock *lock, u32 v
>  	 * sequentiality; this is because the set_locked() function below
>  	 * does not imply a full barrier.
>  	 */
> -	val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));
> +	RES_RESET_TIMEOUT(ts);
> +	val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK) ||
> +				       RES_CHECK_TIMEOUT(ts, ret));
This has the same wfe problem for arm64.
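
For context: arm64 implements atomic_cond_read_acquire() on top of
smp_cond_load_acquire(), which parks the CPU in WFE via __cmpwait_relaxed()
and only re-evaluates the condition when the monitored word changes (or the
event stream fires), so a timeout check folded into the condition, like
RES_CHECK_TIMEOUT() above, may not be polled in a timely manner. A
simplified sketch of the arm64 pattern (not the exact <asm/barrier.h>
source):

/*
 * Simplified from arm64's smp_cond_load_relaxed() to show the shape of
 * the problem.  cond_expr is only re-evaluated after __cmpwait_relaxed()
 * (LDXR + WFE) wakes up, i.e. when *ptr changes or the event stream
 * fires, so a timeout check embedded in cond_expr can go unpolled.
 */
#define smp_cond_load_relaxed(ptr, cond_expr)				\
({									\
	typeof(ptr) __PTR = (ptr);					\
	__unqual_scalar_typeof(*ptr) VAL;				\
	for (;;) {							\
		VAL = READ_ONCE(*__PTR);				\
		if (cond_expr)						\
			break;						\
		__cmpwait_relaxed(__PTR, VAL);	/* WFE here */		\
	}								\
	(typeof(*ptr))VAL;						\
})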
Cheers,
Longman