Message-ID: <CAH5Ym4hUiVHgHQQA15r2ZRaq8KNg4xLs2Ub5fFs1FaPOcgHZbg@mail.gmail.com>
Date: Wed, 28 Jan 2026 16:10:18 -0800
From: Sam Edwards <cfsworks@...il.com>
To: Viacheslav Dubeyko <Slava.Dubeyko@....com>
Cc: Xiubo Li <xiubli@...hat.com>, "idryomov@...il.com" <idryomov@...il.com>,
Milind Changire <mchangir@...hat.com>, "stable@...r.kernel.org" <stable@...r.kernel.org>,
"ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>, "brauner@...nel.org" <brauner@...nel.org>,
"jlayton@...nel.org" <jlayton@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] ceph: free page array when ceph_submit_write() fails
On Mon, Jan 26, 2026 at 2:51 PM Viacheslav Dubeyko
<Slava.Dubeyko@....com> wrote:
>
> On Sun, 2026-01-25 at 18:27 -0800, Sam Edwards wrote:
> > If `locked_pages` is zero, the page array must not be allocated:
> > ceph_process_folio_batch() uses `locked_pages` to decide when to
> > allocate `pages`, and redundant allocations trigger
> > ceph_allocate_page_array()'s BUG_ON(), resulting in a worker oops (and
> > writeback stall) or even a kernel panic. Consequently, the main loop in
> > ceph_writepages_start() assumes that the lifetime of `pages` is confined
> > to a single iteration.
> >
> > The ceph_submit_write() function claims ownership of the page array on
> > success (it is later freed when the write concludes). But failures only
> > redirty/unlock the pages and fail to free the array, making the failure
> > case in ceph_submit_write() fatal.
> >
> > Free the page array (and reset locked_pages) in ceph_submit_write()'s
> > error-handling 'if' block so that the caller's invariant (that the array
> > does not remain in ceph_wbc) is maintained unconditionally, making
> > failures in ceph_submit_write() recoverable as originally intended.
> >
> > Fixes: 1551ec61dc55 ("ceph: introduce ceph_submit_write() method")
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Sam Edwards <CFSworks@...il.com>
> > ---
> > fs/ceph/addr.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index 63b75d214210..c3e0b5b429ea 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -1470,6 +1470,14 @@ int ceph_submit_write(struct address_space *mapping,
> > unlock_page(page);
> > }
> >
> > + if (ceph_wbc->from_pool) {
> > + mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
> > + ceph_wbc->from_pool = false;
> > + } else
> > + kfree(ceph_wbc->pages);
> > + ceph_wbc->pages = NULL;
> > + ceph_wbc->locked_pages = 0;
> > +
>
>
> I see the completely identical code pattern in two patches:
The second patch only contains that pattern because it is moving it into
a separate function; patch 2 isn't introducing any *new* code.
>
> + if (ceph_wbc->from_pool) {
> + mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
> + ceph_wbc->from_pool = false;
> + } else
> + kfree(ceph_wbc->pages);
> + ceph_wbc->pages = NULL;
> + ceph_wbc->locked_pages = 0;
>
> I believe we need to introduce the inline function that can be reused in two
> places.
Patch 2 introduces that inline function as requested -- but the function
is not actually used in two places: for now (in this series), it is only
split out for better readability.
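For reference, this is roughly the shape of the helper -- a minimal
sketch only; the actual name and placement are whatever patch 2 ends up
using, and I'm assuming ceph_wbc here is the writeback control struct
from fs/ceph/addr.c:

static inline
void ceph_free_page_array(struct ceph_writeback_ctl *ceph_wbc)
{
	/* Return the array to the pool it came from, or kfree() it. */
	if (ceph_wbc->from_pool) {
		mempool_free(ceph_wbc->pages, ceph_wb_pagevec_pool);
		ceph_wbc->from_pool = false;
	} else {
		kfree(ceph_wbc->pages);
	}
	ceph_wbc->pages = NULL;
	ceph_wbc->locked_pages = 0;
}

With that in place, the error path in ceph_submit_write() (and any
future caller) can release the array with a single call instead of
repeating the mempool/kfree sequence.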
These patches are organized like this because of kernel development
norms: bugfixes intended for stable (such as this patch) should
consist of minimal, backport-friendly and correctness-focused changes.
Moving existing code to a new function is a separate change and does
not constitute a bugfix, so it needs to go in its own patch that isn't
Cc: stable.
Cheers,
Sam
>
> Thanks,
> Slava.
>
> > ceph_osdc_put_request(req);
> > return -EIO;
> > }