Message-Id: <20240710-bug11-v2-1-e7bc61f32e5d@gmail.com>
Date: Wed, 10 Jul 2024 21:29:21 -0700
From: Pei Li <peili.dev@...il.com>
To: Chris Mason <clm@...com>, Josef Bacik <josef@...icpanda.com>,
David Sterba <dsterba@...e.com>, Qu Wenruo <wqu@...e.com>
Cc: linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, skhan@...uxfoundation.org,
syzkaller-bugs@...glegroups.com,
linux-kernel-mentees@...ts.linuxfoundation.org,
syzbot+853d80cba98ce1157ae6@...kaller.appspotmail.com,
Pei Li <peili.dev@...il.com>
Subject: [PATCH v2] btrfs: Fix slab-use-after-free Read in add_ra_bio_pages
We are accessing the start and len fields of the extent map (em) after
it has been freed through free_extent_map(). Fix this by computing
add_size, which reads em->start and em->len, before the extent map is
freed, so we no longer touch freed memory.
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
Reported-by: syzbot+853d80cba98ce1157ae6@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@...il.com>
---
Syzbot reported the following error:

BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529

This happens because we read em->start and em->len right after em has
been freed through free_extent_map(em). This patch moves the line that
reads those values to before free_extent_map(em), so we no longer
access freed memory.
---
Changes in v2:
- Adapt Qu's suggestion to move the read-after-free line before freeing
- Cc stable kernel
- Link to v1: https://lore.kernel.org/r/20240710-bug11-v1-1-aa02297fbbc9@gmail.com
---
fs/btrfs/compression.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..f271df10ef1c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			put_page(page);
 			break;
 		}
+		add_size = min(em->start + em->len, page_end + 1) - cur;
+
 		free_extent_map(em);
 
 		if (page->index == end_index) {
@@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			}
 		}
 
-		add_size = min(em->start + em->len, page_end + 1) - cur;
 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
 		if (ret != add_size) {
 			unlock_extent(tree, cur, page_end, NULL);
---
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724
Best regards,
--
Pei Li <peili.dev@...il.com>