Message-ID: <20170706222414.GA31138@google.com>
Date: Thu, 6 Jul 2017 15:24:14 -0700
From: Eric Biggers <ebiggers@...gle.com>
To: Alexander Potapenko <glider@...gle.com>
Cc: dvyukov@...gle.com, kcc@...gle.com, viro@...iv.linux.org.uk,
tytso@....edu, linux-kernel@...r.kernel.org,
linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] mbcache: initialize entry->e_referenced in
mb_cache_entry_create()
+Cc linux-ext4
On Thu, Jul 06, 2017 at 08:27:25PM +0200, Alexander Potapenko wrote:
> KMSAN reported use of uninitialized |entry->e_referenced| in a condition
> in mb_cache_shrink():
>
> ==================================================================
> BUG: KMSAN: use of uninitialized memory in mb_cache_shrink+0x3b4/0xc50 fs/mbcache.c:287
> CPU: 2 PID: 816 Comm: kswapd1 Not tainted 4.11.0-rc5+ #2877
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> Call Trace:
> __dump_stack lib/dump_stack.c:16 [inline]
> dump_stack+0x172/0x1c0 lib/dump_stack.c:52
> kmsan_report+0x12a/0x180 mm/kmsan/kmsan.c:927
> __msan_warning_32+0x61/0xb0 mm/kmsan/kmsan_instr.c:469
> mb_cache_shrink+0x3b4/0xc50 fs/mbcache.c:287
> mb_cache_scan+0x67/0x80 fs/mbcache.c:321
> do_shrink_slab mm/vmscan.c:397 [inline]
> shrink_slab+0xc3d/0x12d0 mm/vmscan.c:500
> shrink_node+0x208f/0x2fd0 mm/vmscan.c:2603
> kswapd_shrink_node mm/vmscan.c:3172 [inline]
> balance_pgdat mm/vmscan.c:3289 [inline]
> kswapd+0x160f/0x2850 mm/vmscan.c:3478
> kthread+0x46c/0x5f0 kernel/kthread.c:230
> ret_from_fork+0x29/0x40 arch/x86/entry/entry_64.S:430
> chained origin:
> save_stack_trace+0x37/0x40 arch/x86/kernel/stacktrace.c:59
> kmsan_save_stack_with_flags mm/kmsan/kmsan.c:302 [inline]
> kmsan_save_stack mm/kmsan/kmsan.c:317 [inline]
> kmsan_internal_chain_origin+0x12a/0x1f0 mm/kmsan/kmsan.c:547
> __msan_store_shadow_origin_1+0xac/0x110 mm/kmsan/kmsan_instr.c:257
> mb_cache_entry_create+0x3b3/0xc60 fs/mbcache.c:95
> ext4_xattr_cache_insert fs/ext4/xattr.c:1647 [inline]
> ext4_xattr_block_set+0x4c82/0x5530 fs/ext4/xattr.c:1022
> ext4_xattr_set_handle+0x1332/0x20a0 fs/ext4/xattr.c:1252
> ext4_xattr_set+0x4d2/0x680 fs/ext4/xattr.c:1306
> ext4_xattr_trusted_set+0x8d/0xa0 fs/ext4/xattr_trusted.c:36
> __vfs_setxattr+0x703/0x790 fs/xattr.c:149
> __vfs_setxattr_noperm+0x27a/0x6f0 fs/xattr.c:180
> vfs_setxattr fs/xattr.c:223 [inline]
> setxattr+0x6ae/0x790 fs/xattr.c:449
> path_setxattr+0x1eb/0x380 fs/xattr.c:468
> SYSC_lsetxattr+0x8d/0xb0 fs/xattr.c:490
> SyS_lsetxattr+0x77/0xa0 fs/xattr.c:486
> entry_SYSCALL_64_fastpath+0x13/0x94
> origin:
> save_stack_trace+0x37/0x40 arch/x86/kernel/stacktrace.c:59
> kmsan_save_stack_with_flags mm/kmsan/kmsan.c:302 [inline]
> kmsan_internal_poison_shadow+0xb1/0x1a0 mm/kmsan/kmsan.c:198
> kmsan_kmalloc+0x7f/0xe0 mm/kmsan/kmsan.c:337
> kmem_cache_alloc+0x1c2/0x1e0 mm/slub.c:2766
> mb_cache_entry_create+0x283/0xc60 fs/mbcache.c:86
> ext4_xattr_cache_insert fs/ext4/xattr.c:1647 [inline]
> ext4_xattr_block_set+0x4c82/0x5530 fs/ext4/xattr.c:1022
> ext4_xattr_set_handle+0x1332/0x20a0 fs/ext4/xattr.c:1252
> ext4_xattr_set+0x4d2/0x680 fs/ext4/xattr.c:1306
> ext4_xattr_trusted_set+0x8d/0xa0 fs/ext4/xattr_trusted.c:36
> __vfs_setxattr+0x703/0x790 fs/xattr.c:149
> __vfs_setxattr_noperm+0x27a/0x6f0 fs/xattr.c:180
> vfs_setxattr fs/xattr.c:223 [inline]
> setxattr+0x6ae/0x790 fs/xattr.c:449
> path_setxattr+0x1eb/0x380 fs/xattr.c:468
> SYSC_lsetxattr+0x8d/0xb0 fs/xattr.c:490
> SyS_lsetxattr+0x77/0xa0 fs/xattr.c:486
> entry_SYSCALL_64_fastpath+0x13/0x94
> ==================================================================
>
> Signed-off-by: Alexander Potapenko <glider@...gle.com>
> ---
> fs/mbcache.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index b19be429d655..fdfe8933ac6b 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -93,6 +93,7 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
> entry->e_key = key;
> entry->e_block = block;
> entry->e_reusable = reusable;
> + entry->e_referenced = 0;
> head = mb_cache_entry_head(cache, key);
> hlist_bl_lock(head);
> hlist_bl_for_each_entry(dup, dup_node, head, e_hash_list) {
> --
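For anyone following along: the condition KMSAN is flagging at fs/mbcache.c:287
is the shrinker's "second chance" check.  From memory it looks roughly like the
sketch below (not necessarily the exact upstream code), so a freshly allocated
entry has its never-written e_referenced read the first time kswapd's shrinker
walks over it:

	/* Rough sketch of the check in mb_cache_shrink() that KMSAN flagged. */
	if (entry->e_referenced) {
		/*
		 * Recently used: clear the flag and give the entry one more
		 * pass through the LRU list instead of evicting it now.
		 */
		entry->e_referenced = 0;
		list_move_tail(&entry->e_list, &cache->c_list);
		continue;
	}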
Nice catch! This looks fine. I was wondering whether e_referenced should be
initialized to 1 instead, but I'm thinking 0 is better so that only entries that
have actually been deduplicated against get an extra pass through the list.
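For reference, the only place that sets the flag is mb_cache_entry_touch(),
which (again going from memory, so treat this as a sketch) is essentially just:

	void mb_cache_entry_touch(struct mb_cache *cache,
				  struct mb_cache_entry *entry)
	{
		/*
		 * Called when a cached xattr block is reused ("deduplicated
		 * against"), marking the entry as recently used so the
		 * shrinker gives it a second pass instead of evicting it.
		 */
		entry->e_referenced = 1;
	}

so starting from 0 means a brand-new entry that nobody has reused yet is
eligible for eviction on the shrinker's first pass, while a reused entry
survives one extra round.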
Reviewed-by: Eric Biggers <ebiggers@...gle.com>
- Eric