Message-ID: <20260126022715.404984-2-CFSworks@gmail.com>
Date: Sun, 25 Jan 2026 18:27:14 -0800
From: Sam Edwards <cfsworks@...il.com>
To: Xiubo Li <xiubli@...hat.com>,
Ilya Dryomov <idryomov@...il.com>
Cc: Viacheslav Dubeyko <Slava.Dubeyko@....com>,
Christian Brauner <brauner@...nel.org>,
Milind Changire <mchangir@...hat.com>,
Jeff Layton <jlayton@...nel.org>,
ceph-devel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Sam Edwards <CFSworks@...il.com>,
stable@...r.kernel.org
Subject: [PATCH 1/2] ceph: free page array when ceph_submit_write() fails

If `locked_pages` is zero, the page array must not already be
allocated: ceph_process_folio_batch() uses `locked_pages` to decide
when to allocate `pages`, and a redundant allocation trips
ceph_allocate_page_array()'s BUG_ON(), resulting in a worker oops (and
a writeback stall) or even a kernel panic. Consequently, the main loop
in ceph_writepages_start() assumes that the lifetime of `pages` is
confined to a single iteration.
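
For reference, the allocation side of this invariant looks roughly
like the following (a paraphrased sketch, not a verbatim excerpt of
fs/ceph/addr.c):

	/* ceph_process_folio_batch(): allocate only at the start of a batch */
	if (!ceph_wbc->locked_pages)
		rc = ceph_allocate_page_array(mapping, ceph_wbc, folio);

	/* ceph_allocate_page_array(): a leftover array is fatal */
	BUG_ON(ceph_wbc->pages);
	ceph_wbc->pages = kmalloc_array(ceph_wbc->max_pages,
					sizeof(*ceph_wbc->pages),
					GFP_NOFS);

Any `pages` array that survives a previous iteration therefore trips
the BUG_ON() as soon as the next batch is started.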

ceph_submit_write() takes ownership of the page array on success (it
is freed later, when the write concludes). On failure, however, it
only redirties and unlocks the pages without freeing the array, which
makes any failure in ceph_submit_write() fatal.
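
Schematically, the contract the caller relies on is (an illustrative
sketch of the intended ownership rule, not code from the tree):

	rc = ceph_submit_write(mapping, wbc, ceph_wbc);
	/*
	 * Whatever rc is, ceph_wbc->pages must no longer be the
	 * caller's problem: on success the array now belongs to the
	 * OSD request (and is freed when the write concludes), on
	 * failure it must already have been freed and cleared so the
	 * next batch starts from a clean state.
	 */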

Free the page array (and reset `locked_pages`) in ceph_submit_write()'s
error-handling 'if' block so that the caller's invariant (that no array
is left behind in ceph_wbc) holds unconditionally, making failures in
ceph_submit_write() recoverable as originally intended.

Fixes: 1551ec61dc55 ("ceph: introduce ceph_submit_write() method")
Cc: stable@...r.kernel.org
Signed-off-by: Sam Edwards <CFSworks@...il.com>
---
 fs/ceph/addr.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 63b75d214210..c3e0b5b429ea 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1470,6 +1470,14 @@ int ceph_submit_write(struct address_space *mapping,
 				unlock_page(page);
 			}
 
+			if (ceph_wbc->from_pool) {
+				mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
+				ceph_wbc->from_pool = false;
+			} else
+				kfree(ceph_wbc->pages);
+			ceph_wbc->pages = NULL;
+			ceph_wbc->locked_pages = 0;
+
 			ceph_osdc_put_request(req);
 			return -EIO;
 		}
--
2.52.0