Message-ID: <uDWtvpxzXCkjTZVPdrrhrF_wVv8J6JS1gb1Cy_l6uM6houxjn23qXxT4u8YxzVRJrh0LnMIBOa8Zl_NIVyWZPblDTblYi_VJ357130uk0q4=@proton.me>
Date: Sun, 22 Sep 2024 12:40:07 +0000
From: Piotr Zalewski <pZ010001011111@...ton.me>
To: Kent Overstreet <kent.overstreet@...ux.dev>
Cc: linux-bcachefs@...r.kernel.org, linux-kernel@...r.kernel.org, skhan@...uxfoundation.org
Subject: Re: [PATCH] bcachefs: add GFP_ZERO flag in btree_bounce_alloc


On Saturday, September 21st, 2024 at 8:47 PM, Kent Overstreet <kent.overstreet@...ux.dev> wrote:

> On Mon, Sep 16, 2024 at 10:47:57PM GMT, Piotr Zalewski wrote:
> 
> > Add the __GFP_ZERO flag to the kvmalloc call in btree_bounce_alloc to
> > mitigate a later uninit-value-use KMSAN warning[1].
> > 
> > After applying the patch, the reproducer still triggers a stack
> > overflow[2], but that appears unrelated to the uninit-value-use
> > warning. Further investigation showed that the stack overflow occurs
> > because KMSAN adds additional function calls. A backtrace of where the
> > stack magic number gets smashed was added as a reply to the syzkaller
> > thread[3].
> > 
> > I confirmed that the task's stack magic number gets smashed after the
> > code path where KMSAN detects the uninit-value use has executed, so it
> > can be assumed that the overflow doesn't contribute in any way to the
> > uninit-value-use detection.
> > 
> > [1] https://syzkaller.appspot.com/bug?extid=6f655a60d3244d0c6718
> > [2] https://lore.kernel.org/lkml/66e57e46.050a0220.115905.0002.GAE@google.com
> > [3] https://lore.kernel.org/all/rVaWgPULej8K7HqMPNIu8kVNyXNjjCiTB-QBtItLFBmk0alH6fV2tk4joVPk97Evnuv4ZRDd8HB5uDCkiFG6u81xKdzDj-KrtIMJSlF6Kt8=@proton.me
> > 
> > Signed-off-by: Piotr Zalewski <pZ010001011111@...ton.me>
> 
> 
> Oh hey, nice find :)

Hi!

> We should be able to fix this in a more performant way, though: btree
> node resort is a path where we do care about performance, and we don't
> want to touch the whole buffer more times than necessary.
> 
> Can you try zeroing out the portion after what we consumed, after we
> sort into the bounce buffer?
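
For contrast, my original patch zeroed the whole allocation up front,
roughly (from memory, not the exact hunk; see the patch referenced
above):

	p = kvmalloc(size, __GFP_NOWARN|GFP_NOWAIT|__GFP_ZERO);

in btree_bounce_alloc(), which clears the full buffer on every bounce
allocation. The approach below instead only touches the tail left
uninitialized by the sort.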

Do you mean something like this?
diff --git a/fs/bcachefs/btree_io.c b/fs/bcachefs/btree_io.c
index 56ea9a77cd4a..c737ece6f628 100644
--- a/fs/bcachefs/btree_io.c
+++ b/fs/bcachefs/btree_io.c
@@ -1195,6 +1195,14 @@ int bch2_btree_node_read_done(struct bch_fs *c, struct bch_dev *ca,
 	set_btree_bset(b, b->set, &b->data->keys);
 
 	b->nr = bch2_key_sort_fix_overlapping(c, &sorted->keys, iter);
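+	/*
+	 * Zero the tail of the bounce buffer past the sorted keys so that
+	 * later reads of the full buffer never see uninitialized memory:
+	 */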
+	memset((u8 *)(sorted + 1) + b->nr.live_u64s * sizeof(u64), 0,
+	       btree_buf_bytes(b) -
+	       sizeof(struct btree_node) -
+	       b->nr.live_u64s * sizeof(u64));
 
 	u64s = le16_to_cpu(sorted->keys.u64s);
 	*sorted = *b->data;

I tested that the above doesn't trigger the uninit-value-use warning.
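
If I understand the performance concern correctly, this only touches
the unused tail of the bounce buffer once, after the sort, rather than
clearing the whole btree_buf_bytes(b) allocation on every call to
btree_bounce_alloc.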

Best regards, Piotr Zalewski
