Message-ID: <frpohbqgpyhd6fkwkd5h6efqiph27mgmcbex3bipmksyc2vocp@tfz6oynigmgi>
Date: Sat, 21 Sep 2024 14:47:27 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: Piotr Zalewski <pZ010001011111@...ton.me>
Cc: linux-bcachefs@...r.kernel.org, linux-kernel@...r.kernel.org, 
	skhan@...uxfoundation.org
Subject: Re: [PATCH] bcachefs: add GFP_ZERO flag in btree_bounce_alloc

On Mon, Sep 16, 2024 at 10:47:57PM GMT, Piotr Zalewski wrote:
> Add the __GFP_ZERO flag to the kvmalloc call in btree_bounce_alloc to
> mitigate a later uninit-value-use KMSAN warning[1].
> 
> After applying the patch, the reproducer still triggers a stack
> overflow[2], but it seems unrelated to the uninit-value use warning.
> Further investigation showed that the stack overflow occurs because
> KMSAN adds additional function calls. A backtrace of where the stack
> magic number gets smashed was added as a reply to the syzkaller
> thread[3].
> 
> I confirmed that the task's stack magic number gets smashed after the
> code path where KMSAN detects the uninit-value use is executed, so it
> can be assumed that the stack overflow does not contribute in any way
> to the uninit-value use detection.
> 
> [1] https://syzkaller.appspot.com/bug?extid=6f655a60d3244d0c6718
> [2] https://lore.kernel.org/lkml/66e57e46.050a0220.115905.0002.GAE@google.com
> [3] https://lore.kernel.org/all/rVaWgPULej8K7HqMPNIu8kVNyXNjjCiTB-QBtItLFBmk0alH6fV2tk4joVPk97Evnuv4ZRDd8HB5uDCkiFG6u81xKdzDj-KrtIMJSlF6Kt8=@proton.me
> 
> Signed-off-by: Piotr Zalewski <pZ010001011111@...ton.me>

Oh hey, nice find :)

We should be able to fix this in a more performant way, though: btree
node resort is a path where we do care about performance, and we don't
want to touch the whole buffer more times than necessary.

Can you try zeroing out the portion after what we consumed, after we
sort into the bounce buffer?
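
A minimal sketch of that approach (placeholder names dst, bytes_written
and bytes_allocated; not the actual bcachefs code):

	/*
	 * Only the tail of the bounce buffer past what the sort wrote can
	 * still hold uninitialized data, so zero just that part instead
	 * of __GFP_ZERO-ing the whole allocation.
	 */
	dst = btree_bounce_alloc(c, bytes_allocated, &used_mempool);

	bytes_written = sort_into_bounce_buffer(dst);	/* placeholder for the sort step */

	memset((char *) dst + bytes_written, 0,
	       bytes_allocated - bytes_written);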

> ---
>  fs/bcachefs/btree_io.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/bcachefs/btree_io.c b/fs/bcachefs/btree_io.c
> index 56ea9a77cd4a..3ac8b37f97d7 100644
> --- a/fs/bcachefs/btree_io.c
> +++ b/fs/bcachefs/btree_io.c
> @@ -121,7 +121,7 @@ static void *btree_bounce_alloc(struct bch_fs *c, size_t size,
>  	BUG_ON(size > c->opts.btree_node_size);
>  
>  	*used_mempool = false;
> -	p = kvmalloc(size, __GFP_NOWARN|GFP_NOWAIT);
> +	p = kvmalloc(size, __GFP_ZERO|__GFP_NOWARN|GFP_NOWAIT);
>  	if (!p) {
>  		*used_mempool = true;
>  		p = mempool_alloc(&c->btree_bounce_pool, GFP_NOFS);
> -- 
> 2.46.0
> 
> 
