Message-ID: <ac84eefc-e024-40fb-a92d-3109f686d122@fintech.ru>
Date: Wed, 15 May 2024 08:18:19 -0700
From: Nikita Zhandarovich <n.zhandarovich@...tech.ru>
To: Eric Van Hensbergen <ericvh@...nel.org>
CC: Latchesar Ionkov <lucho@...kov.net>, Dominique Martinet
<asmadeus@...ewreck.org>, Christian Schoenebeck <linux_oss@...debyte.com>,
<v9fs@...ts.linux.dev>, <linux-kernel@...r.kernel.org>,
<lvc-project@...uxtesting.org>,
<syzbot+ff14db38f56329ef68df@...kaller.appspotmail.com>
Subject: Re: [PATCH net v2] net/9p: fix uninit-value in p9_client_rpc()
On 4/8/24 07:10, Nikita Zhandarovich wrote:
> Syzbot with the help of KMSAN reported the following error:
>
> BUG: KMSAN: uninit-value in trace_9p_client_res include/trace/events/9p.h:146 [inline]
> BUG: KMSAN: uninit-value in p9_client_rpc+0x1314/0x1340 net/9p/client.c:754
> trace_9p_client_res include/trace/events/9p.h:146 [inline]
> p9_client_rpc+0x1314/0x1340 net/9p/client.c:754
> p9_client_create+0x1551/0x1ff0 net/9p/client.c:1031
> v9fs_session_init+0x1b9/0x28e0 fs/9p/v9fs.c:410
> v9fs_mount+0xe2/0x12b0 fs/9p/vfs_super.c:122
> legacy_get_tree+0x114/0x290 fs/fs_context.c:662
> vfs_get_tree+0xa7/0x570 fs/super.c:1797
> do_new_mount+0x71f/0x15e0 fs/namespace.c:3352
> path_mount+0x742/0x1f20 fs/namespace.c:3679
> do_mount fs/namespace.c:3692 [inline]
> __do_sys_mount fs/namespace.c:3898 [inline]
> __se_sys_mount+0x725/0x810 fs/namespace.c:3875
> __x64_sys_mount+0xe4/0x150 fs/namespace.c:3875
> do_syscall_64+0xd5/0x1f0
> entry_SYSCALL_64_after_hwframe+0x6d/0x75
>
> Uninit was created at:
> __alloc_pages+0x9d6/0xe70 mm/page_alloc.c:4598
> __alloc_pages_node include/linux/gfp.h:238 [inline]
> alloc_pages_node include/linux/gfp.h:261 [inline]
> alloc_slab_page mm/slub.c:2175 [inline]
> allocate_slab mm/slub.c:2338 [inline]
> new_slab+0x2de/0x1400 mm/slub.c:2391
> ___slab_alloc+0x1184/0x33d0 mm/slub.c:3525
> __slab_alloc mm/slub.c:3610 [inline]
> __slab_alloc_node mm/slub.c:3663 [inline]
> slab_alloc_node mm/slub.c:3835 [inline]
> kmem_cache_alloc+0x6d3/0xbe0 mm/slub.c:3852
> p9_tag_alloc net/9p/client.c:278 [inline]
> p9_client_prepare_req+0x20a/0x1770 net/9p/client.c:641
> p9_client_rpc+0x27e/0x1340 net/9p/client.c:688
> p9_client_create+0x1551/0x1ff0 net/9p/client.c:1031
> v9fs_session_init+0x1b9/0x28e0 fs/9p/v9fs.c:410
> v9fs_mount+0xe2/0x12b0 fs/9p/vfs_super.c:122
> legacy_get_tree+0x114/0x290 fs/fs_context.c:662
> vfs_get_tree+0xa7/0x570 fs/super.c:1797
> do_new_mount+0x71f/0x15e0 fs/namespace.c:3352
> path_mount+0x742/0x1f20 fs/namespace.c:3679
> do_mount fs/namespace.c:3692 [inline]
> __do_sys_mount fs/namespace.c:3898 [inline]
> __se_sys_mount+0x725/0x810 fs/namespace.c:3875
> __x64_sys_mount+0xe4/0x150 fs/namespace.c:3875
> do_syscall_64+0xd5/0x1f0
> entry_SYSCALL_64_after_hwframe+0x6d/0x75
>
> If p9_check_errors() fails early in p9_client_rpc(), req->rc.tag
> will not be properly initialized. However, trace_9p_client_res()
> ends up trying to print it out anyway before p9_client_rpc()
> finishes.
>
> Fix this issue by assigning default values to p9_fcall fields
> such as 'tag' and (just in case KMSAN unearths something new) 'id'
> during the tag allocation stage.
>
> Reported-and-tested-by: syzbot+ff14db38f56329ef68df@...kaller.appspotmail.com
> Fixes: 348b59012e5c ("net/9p: Convert net/9p protocol dumps to tracepoints")
> Signed-off-by: Nikita Zhandarovich <n.zhandarovich@...tech.ru>
> ---
> v2: change fc->tag init value from 0 to P9_NOTAG per Dominique
> Martinet's <asmadeus@...ewreck.org> helpful suggestion.
> Link: https://lore.kernel.org/all/ZhNVMivKCCq6wir0@codewreck.org/
>
> net/9p/client.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/net/9p/client.c b/net/9p/client.c
> index f7e90b4769bb..b05f73c291b4 100644
> --- a/net/9p/client.c
> +++ b/net/9p/client.c
> @@ -235,6 +235,8 @@ static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc,
> if (!fc->sdata)
> return -ENOMEM;
> fc->capacity = alloc_msize;
> + fc->id = 0;
> + fc->tag = P9_NOTAG;
> return 0;
> }
>
Hi Dominique,
Gentle ping on this issue as it is still open. Thanks in advance :)
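In case it helps with review, below is a minimal standalone userspace sketch of the
pattern the patch addresses. The struct and function names are simplified
placeholders, not the real net/9p definitions; it only shows an early error path
reaching a tracepoint-style read before the tag was ever assigned, and how giving
the fields defaults at allocation time keeps that read defined.

    /*
     * Standalone illustration of the uninit-read pattern (NOT the real 9p code;
     * struct and function names here are simplified placeholders).
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define DEMO_NOTAG ((uint16_t)~0)	/* stand-in for P9_NOTAG */

    struct demo_fcall {
    	uint8_t  id;
    	uint16_t tag;
    	size_t   capacity;
    	char    *sdata;
    };

    /*
     * Mirrors the shape of p9_fcall_init(): allocate the buffer and, with the
     * fix applied, give 'id' and 'tag' defined defaults right away.
     */
    static int demo_fcall_init(struct demo_fcall *fc, size_t alloc_msize)
    {
    	fc->sdata = malloc(alloc_msize); /* contents indeterminate, like a fresh slab */
    	if (!fc->sdata)
    		return -1;
    	fc->capacity = alloc_msize;
    	fc->id = 0;            /* default, normally set later by the request encoder */
    	fc->tag = DEMO_NOTAG;  /* default, normally set later when a real tag is picked */
    	return 0;
    }

    /* Stands in for trace_9p_client_res(): reads the tag unconditionally. */
    static void demo_trace_client_res(const struct demo_fcall *rc, int err)
    {
    	printf("client res: tag %u err %d\n", rc->tag, err);
    }

    /*
     * Rough shape of the failing path in p9_client_rpc(): error checking fails
     * before the response tag is ever filled in, yet the trace call still runs.
     */
    static int demo_client_rpc(struct demo_fcall *rc)
    {
    	int err = demo_fcall_init(rc, 8192);

    	if (err)
    		return err;

    	err = -5; /* pretend the demo equivalent of p9_check_errors() failed early */

    	/* Without the defaults above, rc->tag would still be uninitialized here. */
    	demo_trace_client_res(rc, err);

    	free(rc->sdata);
    	return err;
    }

    int main(void)
    {
    	struct demo_fcall rc;

    	demo_client_rpc(&rc); /* expected to hit the early error path */
    	return 0;
    }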
Regards,
Nikita