Message-ID: <Yy3NJFmdxclHTKs3@kbusch-mbp.dhcp.thefacebook.com>
Date:   Fri, 23 Sep 2022 09:13:40 -0600
From:   Keith Busch <kbusch@...nel.org>
To:     Jan Kara <jack@...e.cz>
Cc:     Hugh Dickins <hughd@...gle.com>, Jens Axboe <axboe@...nel.dk>,
        Yu Kuai <yukuai1@...weicloud.com>,
        Liu Song <liusong@...ux.alibaba.com>,
        linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH next] sbitmap: fix lockup while swapping

On Fri, Sep 23, 2022 at 04:43:03PM +0200, Jan Kara wrote:
> On Wed 21-09-22 18:40:12, Jan Kara wrote:
> > On Mon 19-09-22 16:01:39, Hugh Dickins wrote:
> > > On Mon, 19 Sep 2022, Keith Busch wrote:
> > > > On Sun, Sep 18, 2022 at 02:10:51PM -0700, Hugh Dickins wrote:
> > > > > I have almost no grasp of all the possible sbitmap races and their
> > > > > consequences: but using the same !waitqueue_active() check as is used
> > > > > elsewhere fixes the lockup and shows no adverse consequence for me.
> > > > 
> > > > > Fixes: 4acb83417cad ("sbitmap: fix batched wait_cnt accounting")
> > > > > Signed-off-by: Hugh Dickins <hughd@...gle.com>
> > > > > ---
> > > > > 
> > > > >  lib/sbitmap.c |    2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > 
> > > > > --- a/lib/sbitmap.c
> > > > > +++ b/lib/sbitmap.c
> > > > > @@ -620,7 +620,7 @@ static bool __sbq_wake_up(struct sbitmap
> > > > >  		 * function again to wakeup a new batch on a different 'ws'.
> > > > >  		 */
> > > > >  		if (cur == 0)
> > > > > -			return true;
> > > > > +			return !waitqueue_active(&ws->wait);
> > > > 
> > > > If it's 0, that is supposed to mean another thread is about to make it
> > > > not zero and to increment the wakestate index. That should be happening
> > > > now that patch 48c033314f37 is included, at least.
> > > 
> > > I believe that the thread about to make wait_cnt not zero (and increment
> > > the wakestate index) is precisely this interrupted thread: the backtrace
> > > shows that it had just done its wakeups, so it had not yet got as far as
> > > making wait_cnt not zero; and I suppose that either its wakeups did not
> > > empty the waitqueue completely, or another waiter was added as soon as it
> > > dropped the spinlock.
> 
> I was trying to wrap my head around this, but I am failing to see how we
> could have wait_cnt == 0 for long enough to cause any kind of stall, let
> alone a lockup, in sbitmap_queue_wake_up() as you describe. I can understand
> that we could have:
> 
> CPU1						CPU2
> sbitmap_queue_wake_up()
>   ws = sbq_wake_ptr(sbq);
>   cur = atomic_read(&ws->wait_cnt);
>   do {
> 	...
> 	wait_cnt = cur - sub;	/* this will be 0 */
>   } while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));
>   ...
> 						/* Gets the same waitqueue */
> 						ws = sbq_wake_ptr(sbq);
> 						cur = atomic_read(&ws->wait_cnt);
> 						do {
> 							if (cur == 0)
> 								return true; /* loop */
>   wake_up_nr(&ws->wait, wake_batch);
>   smp_mb__before_atomic();
>   sbq_index_atomic_inc(&sbq->wake_index);
>   atomic_set(&ws->wait_cnt, wake_batch); /* This stops looping on CPU2 */
> 
> So until CPU1 reaches the atomic_set(), CPU2 can be looping. But how come
> this takes so long that it causes a hang as you describe? Hum... So either
> CPU1 takes really long to get to atomic_set():
> - can CPU1 get preempted? Likely not, at least in the context you show in
>   your message
> - can CPU1 spend so long in wake_up_nr()? Maybe the waitqueue lock is
>   contended but still...
> 
> or CPU2 somehow sees cur == 0 for longer than it should. The whole sequence
> executed in a loop on CPU2 does not contain anything that would force CPU2
> to refresh its cache and get the new ws->wait_cnt value, so we are at the
> mercy of the CPU cache coherency mechanisms to stage the write on CPU1 and
> propagate it to the other CPUs. But still, I would not expect that to take
> significantly long. Any other ideas?

Thank you for the analysis. I arrived at the same conclusions.

If this is a preempt-enabled context and there's just one CPU, I suppose the
2nd task could spin in the while(), blocking the 1st task from resetting the
wait_cnt. I doubt that's happening here, though, at least for nvme, where we
call this function in irq context.
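
For what it's worth, here is a minimal userspace sketch of just the cur == 0
decision being discussed. Everything below (ws_model, waiters, should_retry)
is a simplified stand-in for lib/sbitmap.c's ws->wait_cnt and
waitqueue_active(&ws->wait), not the kernel code itself:

/* retry_model.c: build with cc -std=c11 retry_model.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct ws_model {
	atomic_int wait_cnt;	/* stands in for ws->wait_cnt */
	int waiters;		/* stands in for waitqueue_active(&ws->wait) */
};

/* The caller loops while this returns true ("retry on another ws"). */
static bool should_retry(struct ws_model *ws, bool fixed)
{
	int cur = atomic_load(&ws->wait_cnt);

	if (cur == 0) {
		if (!fixed)
			return true;	/* pre-patch: keep retrying until the reset */
		/*
		 * Hugh's change: back off while waiters are still queued;
		 * the interrupted thread's pending atomic_set() restarts
		 * the wakeups.
		 */
		return ws->waiters == 0;
	}
	return false;	/* normal path: decrement and maybe wake a batch */
}

int main(void)
{
	/* wait_cnt hit 0, the resetting thread is stalled, waiters remain */
	struct ws_model ws = { .wait_cnt = 0, .waiters = 1 };

	printf("pre-patch: retry=%d (caller keeps looping)\n",
	       should_retry(&ws, false));
	printf("patched:   retry=%d (caller backs off)\n",
	       should_retry(&ws, true));
	return 0;
}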

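And a two-thread toy model of Jan's CPU1/CPU2 trace, purely to make the
window between zeroing wait_cnt and the atomic_set() visible; the 100ms
stall standing in for the interrupt is artificial, and again these are
simplified names, not the real code paths:

/* window_model.c: build with cc -std=c11 -pthread window_model.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define WAKE_BATCH 8

static atomic_int wait_cnt = WAKE_BATCH;
static atomic_long spins;

/* "CPU1": zeroes wait_cnt, then stalls before the reset */
static void *cpu1(void *arg)
{
	int cur = atomic_load(&wait_cnt);

	(void)arg;
	/* mirrors the atomic_try_cmpxchg() loop in the trace above */
	while (!atomic_compare_exchange_weak(&wait_cnt, &cur, 0))
		;
	usleep(100 * 1000);			/* wakeups plus the interrupt window */
	atomic_store(&wait_cnt, WAKE_BATCH);	/* "stops looping on CPU2" */
	return NULL;
}

/* "CPU2": keeps retrying while it sees wait_cnt == 0 on the same ws */
static void *cpu2(void *arg)
{
	(void)arg;
	while (atomic_load(&wait_cnt) == 0)
		atomic_fetch_add(&spins, 1);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, cpu1, NULL);
	usleep(10 * 1000);	/* let "CPU1" zero wait_cnt first */
	pthread_create(&t2, NULL, cpu2, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("CPU2 retried %ld times while wait_cnt stayed 0\n",
	       atomic_load(&spins));
	return 0;
}

On an SMP box the loop exits as soon as CPU1's store propagates, which is
Jan's point: the window should be tiny unless CPU1 itself is delayed, e.g.
by the interrupt in Hugh's backtrace.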