Message-ID: <5880722-767c-16db-fc3-df50a12754b9@google.com>
Date: Mon, 26 Sep 2022 21:02:22 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Hillf Danton <hdanton@...a.com>
cc: Hugh Dickins <hughd@...gle.com>, Keith Busch <kbusch@...nel.org>,
Jan Kara <jack@...e.cz>, Jens Axboe <axboe@...nel.dk>,
Yu Kuai <yukuai1@...weicloud.com>,
Liu Song <liusong@...ux.alibaba.com>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH next] sbitmap: fix lockup while swapping

On Sat, 24 Sep 2022, Hillf Danton wrote:
>
> I think the lockup can be avoided by
> a) either advancing wake_index as early as I can [1],
> b) or doing wakeup in case of zero wait_cnt to kill all cases of waitqueue_active().
>
> Only for thoughts now.
Thanks Hillf: I gave your __sbq_wake_up() patch below several tries,
and as far as I could tell, it works just as well as my one-liner.
But I don't think it's what we would want to do: doesn't it increment
wake_index on every call to __sbq_wake_up()? Whereas I thought it was
intended to be incremented only once per wake_batch calls (thinking in
terms of nr 1).
I'll not be surprised if your advance-wake_index-earlier idea ends
up as part of the solution: but mainly I agree with Jan that the
whole code needs a serious redesign (or perhaps the whole design
needs a serious recode). So I didn't give your version more thought.
Hugh
>
> Hillf
>
> [1] https://lore.kernel.org/lkml/afe5b403-4e37-80fd-643d-79e0876a7047@linux.alibaba.com/
>
> +++ b/lib/sbitmap.c
> @@ -613,6 +613,16 @@ static bool __sbq_wake_up(struct sbitmap
> if (!ws)
> return false;
>
> + do {
> + /* open code sbq_index_atomic_inc(&sbq->wake_index) to avoid race */
> + int old = atomic_read(&sbq->wake_index);
> + int new = sbq_index_inc(old);
> +
> + /* try another ws if someone else takes care of this one */
> + if (old != atomic_cmpxchg(&sbq->wake_index, old, new))
> + return true;
> + } while (0);
> +
> cur = atomic_read(&ws->wait_cnt);
> do {
> /*
> @@ -620,7 +630,7 @@ static bool __sbq_wake_up(struct sbitmap
> * function again to wakeup a new batch on a different 'ws'.
> */
> if (cur == 0)
> - return true;
> + goto out;
> sub = min(*nr, cur);
> wait_cnt = cur - sub;
> } while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));
> @@ -634,6 +644,7 @@ static bool __sbq_wake_up(struct sbitmap
>
> *nr -= sub;
>
> +out:
> /*
> * When wait_cnt == 0, we have to be particularly careful as we are
> * responsible to reset wait_cnt regardless whether we've actually
> @@ -661,12 +672,6 @@ static bool __sbq_wake_up(struct sbitmap
> */
> smp_mb__before_atomic();
>
> - /*
> - * Increase wake_index before updating wait_cnt, otherwise concurrent
> - * callers can see valid wait_cnt in old waitqueue, which can cause
> - * invalid wakeup on the old waitqueue.
> - */
> - sbq_index_atomic_inc(&sbq->wake_index);
> atomic_set(&ws->wait_cnt, wake_batch);
>
> return ret || *nr;