Message-ID: <f351a9235ec9da785af840beb28db0513aa66ba6.camel@ibm.com>
Date: Mon, 26 Jan 2026 22:51:51 +0000
From: Viacheslav Dubeyko <Slava.Dubeyko@....com>
To: Xiubo Li <xiubli@...hat.com>, "idryomov@...il.com" <idryomov@...il.com>,
"cfsworks@...il.com" <cfsworks@...il.com>
CC: Milind Changire <mchangir@...hat.com>,
	"stable@...r.kernel.org" <stable@...r.kernel.org>,
	"ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>,
	"brauner@...nel.org" <brauner@...nel.org>,
	"jlayton@...nel.org" <jlayton@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] ceph: free page array when ceph_submit_write() fails

On Sun, 2026-01-25 at 18:27 -0800, Sam Edwards wrote:
> If `locked_pages` is zero, the page array must not be allocated:
> ceph_process_folio_batch() uses `locked_pages` to decide when to
> allocate `pages`, and redundant allocations trigger
> ceph_allocate_page_array()'s BUG_ON(), resulting in a worker oops (and
> writeback stall) or even a kernel panic. Consequently, the main loop in
> ceph_writepages_start() assumes that the lifetime of `pages` is confined
> to a single iteration.
>
> The ceph_submit_write() function claims ownership of the page array on
> success (it is later freed when the write concludes). On failure,
> however, it only redirties/unlocks the pages without freeing the array,
> making the failure case in ceph_submit_write() fatal.
>
> Free the page array (and reset locked_pages) in ceph_submit_write()'s
> error-handling 'if' block so that the caller's invariant (that the array
> does not remain in ceph_wbc) is maintained unconditionally, making
> failures in ceph_submit_write() recoverable as originally intended.
>
> Fixes: 1551ec61dc55 ("ceph: introduce ceph_submit_write() method")
> Cc: stable@...r.kernel.org
> Signed-off-by: Sam Edwards <CFSworks@...il.com>
> ---
> fs/ceph/addr.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 63b75d214210..c3e0b5b429ea 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1470,6 +1470,14 @@ int ceph_submit_write(struct address_space *mapping,
>  			unlock_page(page);
>  		}
>
> +		if (ceph_wbc->from_pool) {
> +			mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
> +			ceph_wbc->from_pool = false;
> +		} else
> +			kfree(ceph_wbc->pages);
> +		ceph_wbc->pages = NULL;
> +		ceph_wbc->locked_pages = 0;
> +
I see an identical code pattern in both patches of this series:
+		if (ceph_wbc->from_pool) {
+			mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
+			ceph_wbc->from_pool = false;
+		} else
+			kfree(ceph_wbc->pages);
+		ceph_wbc->pages = NULL;
+		ceph_wbc->locked_pages = 0;
I believe we need to introduce an inline helper function that can be reused
in both places.
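Something along these lines could work (just a sketch on my side; the helper
name ceph_wbc_free_page_array() is illustrative, and I am assuming ceph_wbc
is a struct ceph_writeback_ctl and that ceph_wb_pagevec_pool is visible at
both call sites):

static inline void ceph_wbc_free_page_array(struct ceph_writeback_ctl *ceph_wbc)
{
	/* Sketch only: same logic as the duplicated hunk above. */
	if (ceph_wbc->from_pool) {
		mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
		ceph_wbc->from_pool = false;
	} else {
		kfree(ceph_wbc->pages);
	}
	ceph_wbc->pages = NULL;
	ceph_wbc->locked_pages = 0;
}

Then both call sites in the series would shrink to a single
ceph_wbc_free_page_array(ceph_wbc) call.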
Thanks,
Slava.
>  		ceph_osdc_put_request(req);
>  		return -EIO;
>  	}