Message-ID: <20250108151520.2515903-1-hsiangkao@linux.alibaba.com>
Date: Wed, 8 Jan 2025 23:15:19 +0800
From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: stable@vger.kernel.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-erofs@lists.ozlabs.org,
	LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 6.6.y 1/2] erofs: handle overlapped pclusters out of crafted images properly
commit 9e2f9d34dd12e6e5b244ec488bcebd0c2d566c50 upstream.

syzbot reported a task hang issue due to a deadlock case where it is
waiting for the folio lock of a cached folio that will be used for
cache I/Os.

After looking into the crafted fuzzed image, I found it's formed with
several overlapped big pclusters as below:

 Ext:   logical offset   |  length :     physical offset    |  length
   0:        0..   16384 |   16384 :     151552..     167936 |   16384
   1:    16384..   32768 |   16384 :     155648..     172032 |   16384
   2:    32768..   49152 |   16384 :  537223168..  537239552 |   16384
...

Here, extents 0 and 1 are physically overlapped, although that is
entirely _impossible_ for normal filesystem images generated by mkfs.
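
As an illustration only (not part of this patch), a minimal userspace C
sketch of the overlap condition above; `struct ext_sample` and
`phys_overlapped()` are hypothetical names that do not exist in
fs/erofs:

  #include <stdbool.h>
  #include <stdint.h>

  struct ext_sample {
          uint64_t pa, plen;              /* physical offset/length, bytes */
  };

  /* Two extents share physical blocks iff each starts before the other ends. */
  static bool phys_overlapped(const struct ext_sample *a,
                              const struct ext_sample *b)
  {
          return a->pa < b->pa + b->plen && b->pa < a->pa + a->plen;
  }

  int main(void)
  {
          /* extents 0 and 1 from the table above */
          struct ext_sample e0 = { 151552, 16384 }, e1 = { 155648, 16384 };

          return !phys_overlapped(&e0, &e1);      /* exits 0: they overlap */
  }

A valid image produced by mkfs.erofs never yields two extents for which
this check returns true.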

First, managed folios containing compressed data will be marked
up-to-date and then unlocked immediately (unlike in-place folios) once
compressed I/Os are complete.  If physical blocks are not submitted in
incremental order, separate BIOs have to be used to avoid dependency
issues.  However, the current code mis-arranges z_erofs_fill_bio_vec()
and BIO submission, which causes unexpected BIO waits.
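
For reference, the z_erofs_submit_queue() hunk below boils down to the
following reordering (condensed from the diff itself; the memstall
accounting is omitted here): the pending bio is drained first, and only
then is the next page picked up, so pickup_page_for_submission() can no
longer block on a folio whose data sits in a not-yet-submitted bio.

  	struct page *page = NULL;

  	if (bio && (cur != last_index + 1 || last_bdev != mdev.m_bdev)) {
  drain_io:
  		submit_bio(bio);	/* drain the previous run first */
  		bio = NULL;
  	}
  	if (!page) {			/* only then pick up the next page */
  		page = pickup_page_for_submission(pcl, i++,
  						  &f->pagepool, mc);
  		if (!page)
  			continue;
  	}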

Second, managed folios will be connected to their own pclusters for
efficient inter-queries.  However, this is somewhat hard to implement
if overlapped big pclusters exist.  Again, these only appear in fuzzed
images, so let's simply fall back to temporary short-lived pages for
correctness.
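
Condensed from the pickup_page_for_submission() hunk below, that
fallback decision reads roughly as follows (unlock/put details elided;
see the diff for the exact code):

  	if (likely(page->private == (unsigned long)pcl)) {
  		/* already connected to this pcluster: reuse the folio */
  		if (PageUptodate(page)) {
  			unlock_page(page);	/* no cache I/O needed */
  			page = NULL;
  		}
  		goto out;
  	}
  	/* linked to another (overlapped) pcluster: crafted image */
  	tocache = false;	/* fall back to temporary short-lived pages */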

Additionally, it asserts that referenced managed folios cannot be
truncated for now, and it reverts part of commit 2080ca1ed3e4 ("erofs:
tidy up `struct z_erofs_bvec`") for simplicity, although it shouldn't
make any difference.

Reported-by: syzbot+4fc98ed414ae63d1ada2@syzkaller.appspotmail.com
Reported-by: syzbot+de04e06b28cfecf2281c@syzkaller.appspotmail.com
Reported-by: syzbot+c8c8238b394be4a1087d@syzkaller.appspotmail.com
Tested-by: syzbot+4fc98ed414ae63d1ada2@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/r/0000000000002fda01061e334873@google.com
Fixes: 8e6c8fa9f2e9 ("erofs: enable big pcluster feature")
Link: https://lore.kernel.org/r/20240910070847.3356592-1-hsiangkao@linux.alibaba.com
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
---
fs/erofs/zdata.c | 59 +++++++++++++++++++++++++-----------------------
1 file changed, 31 insertions(+), 28 deletions(-)

diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 1c0e6167d8e7..9fa07436a4da 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -1483,14 +1483,13 @@ static struct page *pickup_page_for_submission(struct z_erofs_pcluster *pcl,
goto out;
lock_page(page);
-
- /* only true if page reclaim goes wrong, should never happen */
- DBG_BUGON(justfound && PagePrivate(page));
-
- /* the page is still in manage cache */
- if (page->mapping == mc) {
+ if (likely(page->mapping == mc)) {
WRITE_ONCE(pcl->compressed_bvecs[nr].page, page);
+ /*
+ * The cached folio is still in managed cache but without
+ * a valid `->private` pcluster hint. Let's reconnect them.
+ */
if (!PagePrivate(page)) {
/*
* impossible to be !PagePrivate(page) for
@@ -1504,22 +1503,24 @@ static struct page *pickup_page_for_submission(struct z_erofs_pcluster *pcl,
SetPagePrivate(page);
}
- /* no need to submit io if it is already up-to-date */
- if (PageUptodate(page)) {
- unlock_page(page);
- page = NULL;
+ if (likely(page->private == (unsigned long)pcl)) {
+ /* don't submit cache I/Os again if already uptodate */
+ if (PageUptodate(page)) {
+ unlock_page(page);
+ page = NULL;
+
+ }
+ goto out;
}
- goto out;
+ /*
+ * Already linked with another pcluster, which only appears in
+ * crafted images by fuzzers for now. But handle this anyway.
+ */
+ tocache = false; /* use temporary short-lived pages */
+ } else {
+ DBG_BUGON(1); /* referenced managed folios can't be truncated */
+ tocache = true;
}
-
- /*
- * the managed page has been truncated, it's unsafe to
- * reuse this one, let's allocate a new cache-managed page.
- */
- DBG_BUGON(page->mapping);
- DBG_BUGON(!justfound);
-
- tocache = true;
unlock_page(page);
put_page(page);
out_allocpage:
@@ -1677,16 +1678,11 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
end = cur + pcl->pclusterpages;
do {
- struct page *page;
-
- page = pickup_page_for_submission(pcl, i++,
- &f->pagepool, mc);
- if (!page)
- continue;
+ struct page *page = NULL;
if (bio && (cur != last_index + 1 ||
last_bdev != mdev.m_bdev)) {
-submit_bio_retry:
+drain_io:
submit_bio(bio);
if (memstall) {
psi_memstall_leave(&pflags);
@@ -1695,6 +1691,13 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
bio = NULL;
}
+ if (!page) {
+ page = pickup_page_for_submission(pcl, i++,
+ &f->pagepool, mc);
+ if (!page)
+ continue;
+ }
+
if (unlikely(PageWorkingset(page)) && !memstall) {
psi_memstall_enter(&pflags);
memstall = 1;
@@ -1715,7 +1718,7 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
}
if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE)
- goto submit_bio_retry;
+ goto drain_io;
last_index = cur;
bypass = false;
--
2.43.5