Message-ID: <YyjdiKC0YYUkI+AI@kbusch-mbp>
Date: Mon, 19 Sep 2022 15:22:16 -0600
From: Keith Busch <kbusch@...nel.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: Jens Axboe <axboe@...nel.dk>, Yu Kuai <yukuai1@...weicloud.com>,
Jan Kara <jack@...e.cz>, Liu Song <liusong@...ux.alibaba.com>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH next] sbitmap: fix lockup while swapping
On Sun, Sep 18, 2022 at 02:10:51PM -0700, Hugh Dickins wrote:
> I have almost no grasp of all the possible sbitmap races, and their
> consequences: but using the same !waitqueue_active() check as used
> elsewhere, fixes the lockup and shows no adverse consequence for me.
> Fixes: 4acb83417cad ("sbitmap: fix batched wait_cnt accounting")
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> ---
>
> lib/sbitmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -620,7 +620,7 @@ static bool __sbq_wake_up(struct sbitmap
> * function again to wakeup a new batch on a different 'ws'.
> */
> if (cur == 0)
> - return true;
> + return !waitqueue_active(&ws->wait);
If it's 0, that is supposed to mean another thread is about to make it
non-zero and to increment the wakestate index. That should be happening
after commit 48c033314f37 was included, at least.
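
To spell out the invariant I mean, the accounting loop is shaped roughly
like this (paraphrased from memory, not a verbatim quote of lib/sbitmap.c):

	cur = atomic_read(&ws->wait_cnt);
	do {
		/*
		 * Reading zero means some other caller is mid-way through
		 * recharging wait_cnt to wake_batch and advancing
		 * sbq->wake_index, so this caller can bail out and retry
		 * on the next 'ws'.
		 */
		if (cur == 0)
			return true;
		sub = min(*nr, cur);
		wait_cnt = cur - sub;
	} while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));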
Prior to 4acb83417cad, the code would also return 'true' if the count was
already zero, and this batched code wasn't supposed to behave any
differently in that condition.
Anyway, I don't think the waitqueue_active() criterion on the current
waitstate is correct either. The next waitstate may have waiters too, so
we may still need to account for this batch's count in order to wake them.