Message-ID: <688b3c92-e9aa-f506-a288-646c5477f6df@I-love.SAKURA.ne.jp>
Date: Thu, 31 Mar 2022 08:43:32 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: "Perepechko, Andrew" <andrew.perepechko@....com>,
Dominique Martinet <asmadeus@...ewreck.org>
Cc: Andreas Dilger <adilger@...ger.ca>,
"Theodore Ts'o" <tytso@....edu>,
syzbot <syzbot+bde0f89deacca7c765b8@...kaller.appspotmail.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"syzkaller-bugs@...glegroups.com" <syzkaller-bugs@...glegroups.com>,
"v9fs-developer@...ts.sourceforge.net"
<v9fs-developer@...ts.sourceforge.net>,
"open list:EXT4 FILE SYSTEM" <linux-ext4@...r.kernel.org>,
Tejun Heo <tj@...nel.org>
Subject: Re: [syzbot] possible deadlock in p9_write_work

Hello.

Since "ext4: truncate during setxattr leads to kernel panic" did not choose
per-superblock WQ, ext4_put_super() for some ext4 superblock currently waits
for completion of iput() from delayed_iput_fn() from delayed_iput() from
ext4_xattr_set_entry() from all ext4 superblocks (in addition to other tasks
scheduled by unrelated subsystems).
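
To illustrate, here is a minimal sketch of the pattern described above. The
struct and helper names are illustrative rather than the actual ext4 code;
only the workqueue API calls are the point.

#include <linux/workqueue.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/fs.h>

struct delayed_iput_work {		/* illustrative container */
	struct work_struct work;
	struct inode *inode;
};

static void delayed_iput_fn(struct work_struct *work)
{
	struct delayed_iput_work *diw =
		container_of(work, struct delayed_iput_work, work);

	iput(diw->inode);
	kfree(diw);
}

/* Called from the xattr path instead of a direct iput(). */
static void delayed_iput(struct inode *inode)
{
	struct delayed_iput_work *diw = kmalloc(sizeof(*diw), GFP_NOFS);

	if (!diw) {
		iput(inode);	/* fall back to a synchronous iput() */
		return;
	}
	diw->inode = inode;
	INIT_WORK(&diw->work, delayed_iput_fn);
	schedule_work(&diw->work);	/* lands on the shared system_wq */
}

static void example_put_super(struct super_block *sb)
{
	/*
	 * Waits for *all* pending works on system_wq, i.e. delayed iputs
	 * queued by every ext4 superblock and works queued by unrelated
	 * subsystems such as 9p, not just the works of this superblock.
	 */
	flush_scheduled_work();
	/* ... the rest of ->put_super() ... */
}
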
If ext4_put_super() for a given superblock wants to wait only for works from
that superblock, please use a per-superblock WQ. Creating a per-superblock WQ
via alloc_workqueue() without the WQ_MEM_RECLAIM flag does not consume many
resources. If ext4_put_super() for a given superblock can afford waiting for
iput() from other ext4 superblocks, you can use a per-filesystem WQ instead.
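
Below is a minimal sketch of the per-superblock variant. The example_sb_info
structure, the s_iput_wq field and the example_* helpers are hypothetical
names used only for illustration; alloc_workqueue(), queue_work() and
destroy_workqueue() are the actual API.

#include <linux/workqueue.h>
#include <linux/fs.h>

struct example_sb_info {
	struct workqueue_struct *s_iput_wq;	/* hypothetical field */
	/* ... other per-superblock state ... */
};

/* At mount time: one WQ per superblock, no WQ_MEM_RECLAIM rescuer. */
static int example_alloc_iput_wq(struct super_block *sb)
{
	struct example_sb_info *sbi = sb->s_fs_info;

	sbi->s_iput_wq = alloc_workqueue("example-iput-%s", 0, 0, sb->s_id);
	if (!sbi->s_iput_wq)
		return -ENOMEM;
	return 0;
}

/* In the xattr path: queue on our own WQ instead of schedule_work(). */
static void example_queue_delayed_iput(struct super_block *sb,
				       struct work_struct *work)
{
	struct example_sb_info *sbi = sb->s_fs_info;

	queue_work(sbi->s_iput_wq, work);
}

static void example_put_super(struct super_block *sb)
{
	struct example_sb_info *sbi = sb->s_fs_info;

	/*
	 * Drains only the works queued by this superblock and frees the
	 * WQ; unmount no longer waits for works from other superblocks
	 * or from unrelated subsystems.
	 */
	destroy_workqueue(sbi->s_iput_wq);
	/* ... the rest of ->put_super() ... */
}
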
On 2022/03/31 1:56, Perepechko, Andrew wrote:
> Hello Tetsuo!
>
> Thank you for your report.
>
> I wonder if I can fix this issue by creating a separate per-superblock workqueue.
>
> I may not fully understand the lockdep magic in process_one_work() so any advice is appreciated.
>
> As I see it, if there's no shared locking between different workqueues, unmount should be able to flush only its own scheduled tasks (which should not conflict with any p9 tasks) and unblock the locking chain under similar conditions.
>
> Thank you,
> Andrew
> ________________________________
> From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> Sent: 30 March 2022 05:49
> To: Dominique Martinet <asmadeus@...ewreck.org>
> Cc: Perepechko, Andrew <andrew.perepechko@....com>; Andreas Dilger <adilger@...ger.ca>; Theodore Ts'o <tytso@....edu>; syzbot <syzbot+bde0f89deacca7c765b8@...kaller.appspotmail.com>; linux-kernel@...r.kernel.org <linux-kernel@...r.kernel.org>; syzkaller-bugs@...glegroups.com <syzkaller-bugs@...glegroups.com>; v9fs-developer@...ts.sourceforge.net <v9fs-developer@...ts.sourceforge.net>; open list:EXT4 FILE SYSTEM <linux-ext4@...r.kernel.org>
> Subject: Re: [syzbot] possible deadlock in p9_write_work
>
> On 2022/03/30 11:29, Dominique Martinet wrote:
>> Tetsuo Handa wrote on Wed, Mar 30, 2022 at 10:57:15AM +0900:
>>>>> Please don't use schedule_work() if you need to use flush_scheduled_work().
>>>>
>>>> In this case we don't call flush_scheduled_work -- ext4 does.
>>>
>>> Yes, that's why I changed recipients to ext4 people.
>>
>> Sorry, I hadn't noticed.
>> 9p is the one calling schedule_work, so ultimately it really is the
>> combination of the two, and not just ext4 that's wrong here.
>
> Calling schedule_work() itself does not cause trouble (unless there are
> too many pending works to make progress). Calling flush_scheduled_work()
> causes trouble because it waits for completion of all works, even if
> some of the works cannot be completed due to locks held by the caller of
> flush_scheduled_work(). 9p is innocent in this report.
>
>