Message-ID: <df7ede88-86a2-1c6d-0343-c97a851b9bdf@gmx.com>
Date: Fri, 11 Nov 2022 17:09:18 +0800
From: Qu Wenruo <quwenruo.btrfs@....com>
To: ChenXiaoSong <chenxiaosong2@...wei.com>, clm@...com,
josef@...icpanda.com, dsterba@...e.com
Cc: linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
zhangxiaoxu5@...wei.com
Subject: Re: [PATCH v2] btrfs: qgroup: fix sleep from invalid context bug in
update_qgroup_limit_item()
On 2022/11/11 17:02, ChenXiaoSong wrote:
> Syzkaller reported BUG as follows:
>
> BUG: sleeping function called from invalid context at
> include/linux/sched/mm.h:274
> Call Trace:
> <TASK>
> dump_stack_lvl+0xcd/0x134
> __might_resched.cold+0x222/0x26b
> kmem_cache_alloc+0x2e7/0x3c0
> update_qgroup_limit_item+0xe1/0x390
> btrfs_qgroup_inherit+0x147b/0x1ee0
> create_subvol+0x4eb/0x1710
> btrfs_mksubvol+0xfe5/0x13f0
> __btrfs_ioctl_snap_create+0x2b0/0x430
> btrfs_ioctl_snap_create_v2+0x25a/0x520
> btrfs_ioctl+0x2a1c/0x5ce0
> __x64_sys_ioctl+0x193/0x200
> do_syscall_64+0x35/0x80
>
> Fix this by delaying the limit item update until after the spinlock is unlocked.
The overall idea is way better now.
But sorry that I didn't spot the best solution at first glance.
In fact, your v2 patch reminded me of the correct way to handle this
situation.
>
> Signed-off-by: ChenXiaoSong <chenxiaosong2@...wei.com>
> ---
> fs/btrfs/qgroup.c | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index 9334c3157c22..2792d63c0da4 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
[...]
> @@ -2985,6 +2978,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
> dstgroup->max_excl = srcgroup->max_excl;
> dstgroup->rsv_rfer = srcgroup->rsv_rfer;
> dstgroup->rsv_excl = srcgroup->rsv_excl;
> + update_limit = true;
>
> qgroup_dirty(fs_info, dstgroup);
> qgroup_dirty(fs_info, srcgroup);
You caught the "if (srcid)" branch, which also changes the
limit/rfer/excl numbers of the destination qgroup but never calls
update_qgroup_limit_item().
But if you look into qgroup_dirty() a little deeper, you will find
that it simply moves the target qgroup onto the
fs_info->dirty_qgroups list.
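For reference, the helper itself is tiny (quoting from memory here,
please double-check against fs/btrfs/qgroup.c):

	static void qgroup_dirty(struct btrfs_fs_info *fs_info,
				 struct btrfs_qgroup *qgroup)
	{
		/* Queue the qgroup for the next btrfs_run_qgroups() pass. */
		if (list_empty(&qgroup->dirty))
			list_add(&qgroup->dirty, &fs_info->dirty_qgroups);
	}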
And later, in btrfs_run_qgroups() (which is called during
btrfs_commit_transaction(), and also after create_pending_snapshots()),
we update the quota tree to reflect the result.
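The relevant loop in btrfs_run_qgroups() is roughly like this
(a simplified sketch, not the verbatim code):

	while (!list_empty(&fs_info->dirty_qgroups)) {
		struct btrfs_qgroup *qgroup;

		qgroup = list_first_entry(&fs_info->dirty_qgroups,
					  struct btrfs_qgroup, dirty);
		list_del_init(&qgroup->dirty);
		spin_unlock(&fs_info->qgroup_lock);
		/* The spinlock is dropped, so allocating/sleeping is fine. */
		ret = update_qgroup_info_item(trans, qgroup);
		if (!ret)
			ret = update_qgroup_limit_item(trans, qgroup);
		if (ret)
			qgroup_mark_inconsistent(fs_info);
		spin_lock(&fs_info->qgroup_lock);
	}

Note that it drops fs_info->qgroup_lock before touching the quota
tree, which is exactly what avoids the sleep-in-atomic problem your
patch is fixing.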
So this means all you need to do is call qgroup_dirty() on @dstgroup,
and call it a day.
Everything else will be properly handled.
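Concretely (untested, just to sketch the idea), the limits branch
could then become:

	if (inherit && inherit->flags & BTRFS_QGROUP_INHERIT_SET_LIMITS) {
		dstgroup->lim_flags = inherit->lim.flags;
		dstgroup->max_rfer = inherit->lim.max_rfer;
		dstgroup->max_excl = inherit->lim.max_excl;
		dstgroup->rsv_rfer = inherit->lim.rsv_rfer;
		dstgroup->rsv_excl = inherit->lim.rsv_excl;

		/*
		 * Don't call update_qgroup_limit_item() here, we are
		 * still under fs_info->qgroup_lock. Just mark the qgroup
		 * dirty and let btrfs_run_qgroups() write the limit item.
		 */
		qgroup_dirty(fs_info, dstgroup);
	}

With that, the update_limit flag and the deferred update after
spin_unlock() in your patch should no longer be needed.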
Sorry I didn't notice this earlier...
Thanks,
Qu
> @@ -3053,6 +3047,12 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
>
> unlock:
> spin_unlock(&fs_info->qgroup_lock);
> + if (update_limit && update_qgroup_limit_item(trans, dstgroup)) {
> + qgroup_mark_inconsistent(fs_info);
> + btrfs_info(fs_info,
> + "unable to update quota limit for %llu",
> + dstgroup->qgroupid);
> + }
> if (!ret)
> ret = btrfs_sysfs_add_one_qgroup(fs_info, dstgroup);
> out: