Message-ID: <dd780c08-6265-ed80-6f76-c49325696631@gmx.com>
Date: Tue, 23 Aug 2022 18:58:33 +0800
From: Qu Wenruo <quwenruo.btrfs@....com>
To: Ye Bin <yebin10@...wei.com>, clm@...com, josef@...icpanda.com,
dsterba@...e.com, linux-btrfs@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next] btrfs: fix use-after-free in btrfs_get_global_root
On 2022/8/23 09:59, Ye Bin wrote:
> Syzkaller reported UAF as follows:
> ==================================================================
> BUG: KASAN: use-after-free in btrfs_get_global_root+0x663/0xa10
> Read of size 4 at addr ffff88811ddbb3c0 by task kworker/u16:1/11
>
> CPU: 4 PID: 11 Comm: kworker/u16:1 Not tainted 6.0.0-rc1-next-20220822+ #2
> Workqueue: btrfs-qgroup-rescan btrfs_work_helper
> Call Trace:
> <TASK>
> dump_stack_lvl+0x6e/0x91
> print_report.cold+0xb2/0x6bb
> kasan_report+0xa8/0x130
> kasan_check_range+0x13f/0x1d0
> btrfs_get_global_root+0x663/0xa10
> btrfs_get_fs_root_commit_root+0xa5/0x150
> find_parent_nodes+0x92f/0x2990
> btrfs_find_all_roots_safe+0x12d/0x220
> btrfs_find_all_roots+0xbb/0xd0
This means we're looking for all the parent subvolume roots of a bytenr.
btrfs_get_fs_root_commit_root() can only be called for an indirect ref, and
at that stage we're pretty sure we're grabbing a subvolume root.
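
For reference, a simplified sketch of that lookup (condensed from what
btrfs_get_fs_root_commit_root()/btrfs_get_global_root() do, not the literal
code): for the well-known tree objectids the global-root helper just grabs
the cached pointer out of fs_info, everything else is resolved as a
subvolume root.

	/*
	 * Simplified sketch of the global-root dispatch, only to show how
	 * the backref walk can end up touching fs_info->quota_root.  The
	 * objectids and btrfs_grab_root() are real, the structure is
	 * condensed for illustration.
	 */
	static struct btrfs_root *global_root_sketch(struct btrfs_fs_info *fs_info,
						     u64 objectid)
	{
		switch (objectid) {
		case BTRFS_ROOT_TREE_OBJECTID:
			return btrfs_grab_root(fs_info->tree_root);
		case BTRFS_QUOTA_TREE_OBJECTID:
			/* Reads quota_root->refs; UAF if it was already freed. */
			return btrfs_grab_root(fs_info->quota_root);
		default:
			/* Not a global tree: fall back to subvolume root lookup. */
			return NULL;
		}
	}

So for an indirect backref we expect to end up in the subvolume lookup, not
in the quota root branch.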
> btrfs_qgroup_rescan_worker+0x600/0xc30
> btrfs_work_helper+0xff/0x750
> process_one_work+0x52c/0x930
> worker_thread+0x352/0x8c0
> kthread+0x1b9/0x200
> ret_from_fork+0x22/0x30
> </TASK>
>
> Allocated by task 1895:
> kasan_save_stack+0x1e/0x40
> __kasan_kmalloc+0xa9/0xe0
> btrfs_alloc_root+0x40/0x820
> btrfs_create_tree+0xf8/0x500
But the allocated-by stack shows that the root is the quota root, the only
root that is created during qgroup enabling.
This doesn't sound sane to me.
The two stacks together say that at least some tree blocks are shared by
both a subvolume and the quota root, which already violates the on-disk
format.
Mind sharing the reproducer?
Thanks,
Qu
> btrfs_quota_enable+0x30a/0x1120
> btrfs_ioctl+0x50a3/0x59f0
> __x64_sys_ioctl+0x130/0x170
> do_syscall_64+0x3b/0x90
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> Freed by task 1895:
> kasan_save_stack+0x1e/0x40
> kasan_set_track+0x21/0x30
> kasan_set_free_info+0x20/0x40
> __kasan_slab_free+0x127/0x1c0
> kfree+0xa8/0x2d0
> btrfs_put_root+0x1ca/0x230
> btrfs_quota_enable+0x87c/0x1120
> btrfs_ioctl+0x50a3/0x59f0
> __x64_sys_ioctl+0x130/0x170
> do_syscall_64+0x3b/0x90
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
> ==================================================================
>
> The above issue may happen as follows:
>                 p1                                          p2
> btrfs_quota_enable
>   spin_lock(&fs_info->qgroup_lock);
>   fs_info->quota_root = quota_root;
>   spin_unlock(&fs_info->qgroup_lock);
>
>   ret = qgroup_rescan_init -> return error
>   if (ret)
>     btrfs_put_root(quota_root);
>       kfree(root);
>
>   if (ret) {
>     ulist_free(fs_info->qgroup_ulist);
>     fs_info->qgroup_ulist = NULL;
>     btrfs_sysfs_del_qgroups(fs_info);
>   }                                          btrfs_qgroup_rescan_worker
>                                                btrfs_find_all_roots
>                                                  btrfs_find_all_roots_safe
>                                                    find_parent_nodes
>                                                      btrfs_get_fs_root_commit_root
>                                                        btrfs_grab_root(fs_info->quota_root)
>                                                        -> quota_root already freed
>
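The "Read of size 4" in the first report also fits that picture:
btrfs_grab_root() starts by reading the root's 32-bit refcount, and that is
exactly the access that lands on freed memory once the pointer left in
fs_info->quota_root is stale.  Roughly (simplified, for illustration only):

	/*
	 * Roughly what btrfs_grab_root() does: the first thing it touches is
	 * root->refs (a 4-byte refcount_t).  If @root was already kfree()d
	 * by the qgroup_rescan_init() error path, this read is the
	 * use-after-free KASAN reports.
	 */
	struct btrfs_root *btrfs_grab_root(struct btrfs_root *root)
	{
		if (root && refcount_inc_not_zero(&root->refs))
			return root;
		return NULL;
	}
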
> Syzkaller also reported another issue:
> ==================================================================
> BUG: KASAN: use-after-free in ulist_release+0x30/0xb3
> Read of size 8 at addr ffff88811413d048 by task rep/2921
>
> CPU: 3 PID: 2921 Comm: rep Not tainted 6.0.0-rc1-next-20220822+ #3
> rep[2921] cmdline: ./rep
> Call Trace:
> <TASK>
> dump_stack_lvl+0x6e/0x91
> print_report.cold+0xb2/0x6bb
> kasan_report+0xa8/0x130
> ulist_release+0x30/0xb3
> ulist_reinit+0x16/0x56
> btrfs_qgroup_free_refroot+0x288/0x3f0
> btrfs_qgroup_free_meta_all_pertrans+0xed/0x1e0
> commit_fs_roots+0x28c/0x430
> btrfs_commit_transaction+0x9a6/0x1b40
> btrfs_qgroup_rescan+0x7e/0x130
> btrfs_ioctl+0x48ed/0x59f0
> __x64_sys_ioctl+0x130/0x170
> do_syscall_64+0x3b/0x90
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
> </TASK>
>
> Allocated by task 2900:
> kasan_save_stack+0x1e/0x40
> __kasan_kmalloc+0xa9/0xe0
> ulist_alloc+0x5c/0xe0
> btrfs_quota_enable+0x1b2/0x1160
> btrfs_ioctl+0x50a3/0x59f0
> __x64_sys_ioctl+0x130/0x170
> do_syscall_64+0x3b/0x90
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> Freed by task 2900:
> kasan_save_stack+0x1e/0x40
> kasan_set_track+0x21/0x30
> kasan_set_free_info+0x20/0x40
> __kasan_slab_free+0x127/0x1c0
> kfree+0xa8/0x2d0
> ulist_free.cold+0x15/0x1a
> btrfs_quota_enable+0x8bf/0x1160
> btrfs_ioctl+0x50a3/0x59f0
> __x64_sys_ioctl+0x130/0x170
> do_syscall_64+0x3b/0x90
> entry_SYSCALL_64_after_hwframe+0x63/0xcd
> ==================================================================
>
> To solve the above issues, only set 'fs_info->quota_root' after
> qgroup_rescan_init() returns success.
>
> Signed-off-by: Ye Bin <yebin10@...wei.com>
> ---
> fs/btrfs/qgroup.c | 20 ++++++++++----------
> 1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index db723c0026bd..16f0b038295a 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
> @@ -1158,18 +1158,18 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info)
> if (ret)
> goto out_free_path;
>
> - /*
> - * Set quota enabled flag after committing the transaction, to avoid
> - * deadlocks on fs_info->qgroup_ioctl_lock with concurrent snapshot
> - * creation.
> - */
> - spin_lock(&fs_info->qgroup_lock);
> - fs_info->quota_root = quota_root;
> - set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
> - spin_unlock(&fs_info->qgroup_lock);
> -
> ret = qgroup_rescan_init(fs_info, 0, 1);
> if (!ret) {
> + /*
> + * Set quota enabled flag after committing the transaction, to
> + * avoid deadlocks on fs_info->qgroup_ioctl_lock with concurrent
> + * snapshot creation.
> + */
> + spin_lock(&fs_info->qgroup_lock);
> + fs_info->quota_root = quota_root;
> + set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
> + spin_unlock(&fs_info->qgroup_lock);
> +
> qgroup_rescan_zero_tracking(fs_info);
> fs_info->qgroup_rescan_running = true;
> btrfs_queue_work(fs_info->qgroup_rescan_workers,