Message-id: <002b01cfa7bd$32774ad0$9765e070$@samsung.com>
Date:	Fri, 25 Jul 2014 12:00:57 +0800
From:	Chao Yu <chao2.yu@...sung.com>
To:	Jaegeuk Kim <jaegeuk@...nel.org>,
	Changman Lee <cm224.lee@...sung.com>
Cc:	tsyvarev@...ras.ru, Gu Zheng <guz.fnst@...fujitsu.com>,
	linux-f2fs-devel@...ts.sourceforge.net,
	linux-kernel@...r.kernel.org
Subject: [f2fs-dev][PATCH 1/2] f2fs: avoid using invalid mapping of node_inode
 when evicting meta inode

Andrey Tsyvarev reported:
"Using memory error detector reveals the following use-after-free error 
in 3.15.0:

AddressSanitizer: heap-use-after-free in f2fs_evict_inode
Read of size 8 by thread T22279:
  [<ffffffffa02d8702>] f2fs_evict_inode+0x102/0x2e0 [f2fs] 
  [<ffffffff812359af>] evict+0x15f/0x290 
  [<     inlined    >] iput+0x196/0x280 iput_final 
  [<ffffffff812369a6>] iput+0x196/0x280 
  [<ffffffffa02dc416>] f2fs_put_super+0xd6/0x170 [f2fs] 
  [<ffffffff81210095>] generic_shutdown_super+0xc5/0x1b0 
  [<ffffffff812105fd>] kill_block_super+0x4d/0xb0 
  [<ffffffff81210a86>] deactivate_locked_super+0x66/0x80 
  [<ffffffff81211c98>] deactivate_super+0x68/0x80 
  [<ffffffff8123cc88>] mntput_no_expire+0x198/0x250 
  [<     inlined    >] SyS_umount+0xe9/0x1a0 SYSC_umount 
  [<ffffffff8123f1c9>] SyS_umount+0xe9/0x1a0 
  [<ffffffff81cc8df9>] system_call_fastpath+0x16/0x1b 

Freed by thread T3:
  [<ffffffffa02dc337>] f2fs_i_callback+0x27/0x30 [f2fs] 
  [<     inlined    >] rcu_process_callbacks+0x2d6/0x930 __rcu_reclaim 
  [<     inlined    >] rcu_process_callbacks+0x2d6/0x930 rcu_do_batch 
  [<     inlined    >] rcu_process_callbacks+0x2d6/0x930 invoke_rcu_callbacks 
  [<     inlined    >] rcu_process_callbacks+0x2d6/0x930 __rcu_process_callbacks
  [<ffffffff810fd266>] rcu_process_callbacks+0x2d6/0x930 
  [<ffffffff8107cce2>] __do_softirq+0x142/0x380 
  [<ffffffff8107cf50>] run_ksoftirqd+0x30/0x50 
  [<ffffffff810b2a87>] smpboot_thread_fn+0x197/0x280 
  [<ffffffff810a8238>] kthread+0x148/0x160 
  [<ffffffff81cc8d4c>] ret_from_fork+0x7c/0xb0 

Allocated by thread T22276:
  [<ffffffffa02dc7dd>] f2fs_alloc_inode+0x2d/0x170 [f2fs] 
  [<ffffffff81235e2a>] iget_locked+0x10a/0x230 
  [<ffffffffa02d7495>] f2fs_iget+0x35/0xa80 [f2fs] 
  [<ffffffffa02e2393>] f2fs_fill_super+0xb53/0xff0 [f2fs] 
  [<ffffffff81211bce>] mount_bdev+0x1de/0x240 
  [<ffffffffa02dbce0>] f2fs_mount+0x10/0x20 [f2fs] 
  [<ffffffff81212a85>] mount_fs+0x55/0x220 
  [<ffffffff8123c026>] vfs_kern_mount+0x66/0x200 
  [<     inlined    >] do_mount+0x2b4/0x1120 do_new_mount 
  [<ffffffff812400d4>] do_mount+0x2b4/0x1120 
  [<     inlined    >] SyS_mount+0xb2/0x110 SYSC_mount 
  [<ffffffff812414a2>] SyS_mount+0xb2/0x110 
  [<ffffffff81cc8df9>] system_call_fastpath+0x16/0x1b 

The buggy address ffff8800587866c8 is located 48 bytes inside
  of 680-byte region [ffff880058786698, ffff880058786940)

Memory state around the buggy address:
  ffff880058786100: ffffffff ffffffff ffffffff ffffffff
  ffff880058786200: ffffffff ffffffff ffffffrr rrrrrrrr
  ffff880058786300: rrrrrrrr rrffffff ffffffff ffffffff
  ffff880058786400: ffffffff ffffffff ffffffff ffffffff
  ffff880058786500: ffffffff ffffffff ffffffff fffffffr
 >ffff880058786600: rrrrrrrr rrrrrrrr rrrfffff ffffffff
                                                ^
  ffff880058786700: ffffffff ffffffff ffffffff ffffffff
  ffff880058786800: ffffffff ffffffff ffffffff ffffffff
  ffff880058786900: ffffffff rrrrrrrr rrrrrrrr rrrr....
  ffff880058786a00: ........ ........ ........ ........
  ffff880058786b00: ........ ........ ........ ........
Legend:
  f - 8 freed bytes
  r - 8 redzone bytes
  . - 8 allocated bytes
  x=1..7 - x allocated bytes + (8-x) redzone bytes

Investigation shows that f2fs_evict_inode, when called for 'meta_inode',
uses invalidate_mapping_pages() on the mapping of 'node_inode'.
But 'node_inode' is deleted before 'meta_inode' in f2fs_put_super via
iput().

It seems that in the common usage scenario this use-after-free is benign,
because 'node_inode' still holds partially valid data even after
kmem_cache_free().
But things may change if, while 'meta_inode' is being evicted in one f2fs
filesystem, another (mounted) f2fs filesystem requests an inode from the
cache and what was formerly the 'node_inode' of the first filesystem is
returned."
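
For reference, the unmount path drops the two internal inodes in exactly
this order; a simplified sketch of f2fs_put_super() (most teardown steps
elided) looks roughly like:

static void f2fs_put_super(struct super_block *sb)
{
	struct f2fs_sb_info *sbi = F2FS_SB(sb);

	/* ... writeback and other cleanup elided ... */

	iput(sbi->node_inode);	/* node_inode is evicted and freed (via RCU) first */
	iput(sbi->meta_inode);	/* meta_inode eviction then touches NODE_MAPPING(sbi) */

	/* ... remaining teardown elided ... */
}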

The nids of both meta_inode and node_inode are reserved, so it is not
necessary for us to invalidate pages which will never be allocated.
To fix this issue, let's skip the needless invalidation of pages for
{meta,node}_inode in f2fs_evict_inode.
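
With this change applied, the tail of f2fs_evict_inode() reads roughly as
follows (see the diff below): {meta,node}_inode jump straight to out_clear
and never touch NODE_MAPPING(sbi):

	/* regular inodes fall through from the deletion path above */
no_delete:
	invalidate_mapping_pages(NODE_MAPPING(sbi), inode->i_ino, inode->i_ino);
out_clear:
	clear_inode(inode);
}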

Reported-by: Andrey Tsyvarev <tsyvarev@...ras.ru>
Tested-by: Andrey Tsyvarev <tsyvarev@...ras.ru>
Signed-off-by: Gu Zheng <guz.fnst@...fujitsu.com>
Signed-off-by: Chao Yu <chao2.yu@...sung.com>
---
 fs/f2fs/inode.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
index 2cf6962..cafba3c 100644
--- a/fs/f2fs/inode.c
+++ b/fs/f2fs/inode.c
@@ -273,7 +273,7 @@ void f2fs_evict_inode(struct inode *inode)
 
 	if (inode->i_ino == F2FS_NODE_INO(sbi) ||
 			inode->i_ino == F2FS_META_INO(sbi))
-		goto no_delete;
+		goto out_clear;
 
 	f2fs_bug_on(get_dirty_dents(inode));
 	remove_dirty_dir_inode(inode);
@@ -295,6 +295,7 @@ void f2fs_evict_inode(struct inode *inode)
 
 	sb_end_intwrite(inode->i_sb);
 no_delete:
-	clear_inode(inode);
 	invalidate_mapping_pages(NODE_MAPPING(sbi), inode->i_ino, inode->i_ino);
+out_clear:
+	clear_inode(inode);
 }
-- 
2.0.1.474.g72c7794

