Message-Id: <20240710-bug11-v1-1-aa02297fbbc9@gmail.com>
Date: Wed, 10 Jul 2024 20:42:47 -0700
From: Pei Li <peili.dev@...il.com>
To: Chris Mason <clm@...com>, Josef Bacik <josef@...icpanda.com>,
David Sterba <dsterba@...e.com>
Cc: linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
skhan@...uxfoundation.org, syzkaller-bugs@...glegroups.com,
linux-kernel-mentees@...ts.linuxfoundation.org,
syzbot+853d80cba98ce1157ae6@...kaller.appspotmail.com,
Pei Li <peili.dev@...il.com>
Subject: [PATCH] btrfs: Fix slab-use-after-free Read in add_ra_bio_pages

We are accessing the start and len fields in em after it has been freed.

This patch stores the values we are going to access from em before it is
freed, so that we no longer access freed memory.

Reported-by: syzbot+853d80cba98ce1157ae6@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@...il.com>
---
Syzbot reported the following error:

BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529

This happens because em->start and em->len are read right after em has
been freed through free_extent_map(em). The fix copies both values into
locals while em is still valid and uses the locals afterwards.
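
For reference, the lifetime bug reduces to the pattern below. This is a
minimal user-space sketch, not btrfs code: struct em_sketch, buggy_path()
and fixed_path() are made-up names, and free() stands in for dropping the
last reference via free_extent_map().

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for struct extent_map (sketch only). */
struct em_sketch {
	unsigned long long start;
	unsigned long long len;
};

/* Buggy pattern: fields are read after the object is freed. */
unsigned long long buggy_path(struct em_sketch *em)
{
	free(em);                      /* stands in for free_extent_map(em) */
	return em->start + em->len;    /* use-after-free read */
}

/* Fixed pattern: copy the fields out while the object is still live. */
unsigned long long fixed_path(struct em_sketch *em)
{
	unsigned long long start = em->start;
	unsigned long long len = em->len;

	free(em);
	return start + len;            /* only local copies are read */
}

int main(void)
{
	struct em_sketch *em = malloc(sizeof(*em));

	if (!em)
		return 1;
	em->start = 4096;
	em->len = 8192;
	printf("extent end = %llu\n", fixed_path(em));
	return 0;
}

Built with -fsanitize=address, the read in buggy_path() is flagged as a
heap-use-after-free, analogous to the KASAN report above.
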
---
fs/btrfs/compression.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..42b528aee63b 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -449,6 +449,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		u64 page_end;
 		u64 pg_index = cur >> PAGE_SHIFT;
 		u32 add_size;
+		u64 start = 0, len = 0;
 
 		if (pg_index > end_index)
 			break;
@@ -500,12 +501,17 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		em = lookup_extent_mapping(em_tree, cur, page_end + 1 - cur);
 		read_unlock(&em_tree->lock);
 
+		if (em) {
+			start = em->start;
+			len = em->len;
+		}
+
 		/*
 		 * At this point, we have a locked page in the page cache for
 		 * these bytes in the file. But, we have to make sure they map
 		 * to this compressed extent on disk.
 		 */
-		if (!em || cur < em->start ||
+		if (!em || cur < start ||
 		    (cur + fs_info->sectorsize > extent_map_end(em)) ||
 		    (em->block_start >> SECTOR_SHIFT) != orig_bio->bi_iter.bi_sector) {
 			free_extent_map(em);
@@ -526,7 +532,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 			}
 		}
 
-		add_size = min(em->start + em->len, page_end + 1) - cur;
+		add_size = min(start + len, page_end + 1) - cur;
 		ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
 		if (ret != add_size) {
 			unlock_extent(tree, cur, page_end, NULL);
---
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724
Best regards,
--
Pei Li <peili.dev@...il.com>