Message-ID: <7fd48db1-949a-46d9-ad73-a3cf5c95796e@huawei.com>
Date: Thu, 29 May 2025 18:34:32 +0800
From: "wangjianjian (C)" <wangjianjian3@...wei.com>
To: Yangtao Li <frank.li@...o.com>, <slava@...eyko.com>,
<glaubitz@...sik.fu-berlin.de>, Andrew Morton <akpm@...ux-foundation.org>,
Ernesto A. Fernández <ernesto.mnd.fernandez@...il.com>
CC: <linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<syzbot+8c0bc9f818702ff75b76@...kaller.appspotmail.com>
Subject: Re: [PATCH v2] hfsplus: remove mutex_lock check in
hfsplus_free_extents
On 2025/5/29 14:18, Yangtao Li wrote:
> Syzbot reported an issue in hfsplus filesystem:
>
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 4400 at fs/hfsplus/extents.c:346
> hfsplus_free_extents+0x700/0xad0
> Call Trace:
> <TASK>
> hfsplus_file_truncate+0x768/0xbb0 fs/hfsplus/extents.c:606
> hfsplus_write_begin+0xc2/0xd0 fs/hfsplus/inode.c:56
> cont_expand_zero fs/buffer.c:2383 [inline]
> cont_write_begin+0x2cf/0x860 fs/buffer.c:2446
> hfsplus_write_begin+0x86/0xd0 fs/hfsplus/inode.c:52
> generic_cont_expand_simple+0x151/0x250 fs/buffer.c:2347
> hfsplus_setattr+0x168/0x280 fs/hfsplus/inode.c:263
> notify_change+0xe38/0x10f0 fs/attr.c:420
> do_truncate+0x1fb/0x2e0 fs/open.c:65
> do_sys_ftruncate+0x2eb/0x380 fs/open.c:193
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> To avoid deadlock, commit 31651c607151 ("hfsplus: avoid deadlock
> on file truncation") unlocks the extents tree before calling
> hfsplus_free_extents(), and adds a check in hfsplus_free_extents()
> for whether the extents tree is locked.
>
> However, when operations such as hfsplus_file_release,
> hfsplus_setattr, hfsplus_unlink, and hfsplus_get_block are executed
> concurrently on different files, the WARN_ON is very likely to
> trigger, which leads syzbot and xfstests to report it as a failure.
>
> The comment above this warning also describes one of the situations
> that easily triggers it and causes xfstests and syzbot to report
> errors:
>
> [task A] [task B]
> ->hfsplus_file_release
> ->hfsplus_file_truncate
> ->hfs_find_init
> ->mutex_lock
> ->mutex_unlock
> ->hfsplus_write_begin
> ->hfsplus_get_block
> ->hfsplus_file_extend
> ->hfsplus_ext_read_extent
> ->hfs_find_init
> ->mutex_lock
> ->hfsplus_free_extents
> WARN_ON(mutex_is_locked) !!!
I am not familiar with hfsplus, but hfsplus_file_release calls
hfsplus_file_truncate with the inode lock held, and hfsplus_write_begin
can be called from hfsplus_file_truncate and from buffered write, both
of which should also hold the inode lock. So I think task B should be
the writeback process, which calls hfsplus_get_block.
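
For reference, the last-close truncate path I am referring to looks
roughly like this (paraphrased and simplified from my reading of
fs/hfsplus/inode.c, so treat it as a sketch rather than the exact
current code):

	static int hfsplus_file_release(struct inode *inode, struct file *file)
	{
		if (HFSPLUS_IS_RSRC(inode))
			inode = HFSPLUS_I(inode)->rsrc_inode;
		if (atomic_dec_and_test(&HFSPLUS_I(inode)->opencnt)) {
			inode_lock(inode);
			/* drop preallocated blocks on last close */
			hfsplus_file_truncate(inode);
			inode_unlock(inode);
		}
		return 0;
	}
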
And ->opencnt seems to serve as something like the link count in other
filesystems. Maybe we can move hfsplus_file_truncate to
hfsplus_evict_inode, which is only called once all users of this inode
are gone and writeback for this inode has also finished.
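
What I have in mind is something along these lines; this is a
completely untested sketch, and whether hfsplus_file_truncate can
safely be called from the evict path (and whether the
S_ISREG/resource-fork check is the right guard) would need to be
confirmed:

	static void hfsplus_evict_inode(struct inode *inode)
	{
		truncate_inode_pages_final(&inode->i_data);

		/*
		 * No user of the inode remains at this point and writeback
		 * for it has completed, so freeing the preallocated extents
		 * here cannot race with hfsplus_get_block() from another
		 * context.
		 */
		if (S_ISREG(inode->i_mode) && !HFSPLUS_IS_RSRC(inode))
			hfsplus_file_truncate(inode);

		clear_inode(inode);
		if (HFSPLUS_IS_RSRC(inode)) {
			HFSPLUS_I(HFSPLUS_I(inode)->rsrc_inode)->rsrc_inode = NULL;
			HFSPLUS_I(inode)->rsrc_inode = NULL;
		}
	}
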
>
> Several threads could try to lock the shared extents tree.
> The warning can then be triggered in one thread while another thread
> holds the tree lock. This is incorrect behavior of the code, so the
> warning needs to be removed.
>
> Fixes: 31651c607151f ("hfsplus: avoid deadlock on file truncation")
> Reported-by: syzbot+8c0bc9f818702ff75b76@...kaller.appspotmail.com
> Closes: https://lore.kernel.org/all/00000000000057fa4605ef101c4c@google.com/
> Signed-off-by: Yangtao Li <frank.li@...o.com>
> ---
> fs/hfsplus/extents.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
> index a6d61685ae79..b1699b3c246a 100644
> --- a/fs/hfsplus/extents.c
> +++ b/fs/hfsplus/extents.c
> @@ -342,9 +342,6 @@ static int hfsplus_free_extents(struct super_block *sb,
> int i;
> int err = 0;
>
> - /* Mapping the allocation file may lock the extent tree */
> - WARN_ON(mutex_is_locked(&HFSPLUS_SB(sb)->ext_tree->tree_lock));
> -
> hfsplus_dump_extent(extent);
> for (i = 0; i < 8; extent++, i++) {
> count = be32_to_cpu(extent->block_count);
--
Regards