Message-ID: <20180611172053.GR1351649@devbig577.frc2.facebook.com>
Date: Mon, 11 Jun 2018 10:20:53 -0700
From: Tejun Heo <tj@...nel.org>
To: Jan Kara <jack@...e.cz>
Cc: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Dmitry Vyukov <dvyukov@...gle.com>,
Jens Axboe <axboe@...nel.dk>,
syzbot <syzbot+4a7438e774b21ddd8eca@...kaller.appspotmail.com>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Al Viro <viro@...iv.linux.org.uk>,
Dave Chinner <david@...morbit.com>,
linux-block@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] bdi: Fix another oops in wb_workfn()
Hello,
On Mon, Jun 11, 2018 at 06:29:20PM +0200, Jan Kara wrote:
> > Would something like the following work or am I missing the point
> > entirely?
>
> I was pondering the same solution for a while but I think it won't work.
> The problem is that e.g. wb_memcg_offline() could have already removed
> wb from the radix tree but it is still pending in bdi->wb_list
> (wb_shutdown() has not run yet) and so we'd drop reference we didn't get.
Yeah, right, so the root cause is that we're walking the wb_list while
holding the lock and expecting the object to stay there even after the
lock is released.  Hmm... we can use a mutex to synchronize the two
destruction paths.  It's not like they're hot paths anyway.
Thanks.
--
tejun