Message-ID: <CAJD7tkbPSQguHegkzN65==GHuNN9_RPm1FonnF8Bi=BsQDhxng@mail.gmail.com>
Date: Mon, 2 Dec 2024 11:32:30 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Chengming Zhou <chengming.zhou@...ux.dev>
Cc: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, hannes@...xchg.org, nphamcs@...il.com,
usamaarif642@...il.com, ryan.roberts@....com, 21cnbao@...il.com,
akpm@...ux-foundation.org, wajdi.k.feghali@...el.com, vinodh.gopal@...el.com
Subject: Re: [PATCH v1 2/2] mm: zswap: zswap_store_pages() simplifications for batching.
On Wed, Nov 27, 2024 at 11:00 PM Chengming Zhou
<chengming.zhou@...ux.dev> wrote:
>
> On 2024/11/28 06:53, Kanchana P Sridhar wrote:
> > In order to set up zswap_store_pages() to enable a clean batching
> > implementation in [1], this patch implements the following changes:
> >
> > 1) Addition of zswap_alloc_entries() which will allocate zswap entries for
> > all pages in the specified range for the folio, upfront. If this fails,
> > we return an error status to zswap_store().
> >
> > 2) Addition of zswap_compress_pages() that calls zswap_compress() for each
> > page, and returns false if any zswap_compress() fails, so
> > zswap_store_page() can clean up allocated resources and return an error
> > status to zswap_store().
> >
> > 3) A "store_pages_failed" label that is a catch-all for all failure points
> > in zswap_store_pages(). This facilitates cleaner error handling within
> > zswap_store_pages(), which will become important for IAA compress
> > batching in [1].
> >
> > [1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935
> >
> > Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
> > ---
> > mm/zswap.c | 93 +++++++++++++++++++++++++++++++++++++++++-------------
> > 1 file changed, 71 insertions(+), 22 deletions(-)
> >
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index b09d1023e775..db80c66e2205 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -1409,9 +1409,56 @@ static void shrink_worker(struct work_struct *w)
> > * main API
> > **********************************/
> >
> > +static bool zswap_compress_pages(struct page *pages[],
> > + struct zswap_entry *entries[],
> > + u8 nr_pages,
> > + struct zswap_pool *pool)
> > +{
> > + u8 i;
> > +
> > + for (i = 0; i < nr_pages; ++i) {
> > + if (!zswap_compress(pages[i], entries[i], pool))
> > + return false;
> > + }
> > +
> > + return true;
> > +}
>
> How about introducing a `zswap_compress_folio()` interface which
> can be used by `zswap_store()`?
> ```
> zswap_store()
> nr_pages = folio_nr_pages(folio)
>
> entries = zswap_alloc_entries(nr_pages)
>
> ret = zswap_compress_folio(folio, entries, pool)
>
> // store entries into xarray and LRU list
> ```
>
> And this version `zswap_compress_folio()` is very simple for now:
> ```
> zswap_compress_folio()
> nr_pages = folio_nr_pages(folio)
>
> for (index = 0; index < nr_pages; ++index) {
> struct page *page = folio_page(folio, index);
>
> if (!zswap_compress(page, &entries[index], pool))
> return false;
> }
>
> return true;
> ```
> This can be easily extended to support your "batched" version.
>
> Then the old `zswap_store_page()` could be removed.
>
> The good point is simplicity: we don't need to slice the folio into
> multiple batches and then repeat the common operations for each batch,
> like preparing entries and storing into the xarray and LRU list...
+1
Also, I don't like that these helpers hide some of the loops while
leaving others; as Johannes said, please keep all of the iteration over
pages at the same function level where possible to make the code clear.
This should not be a separate series either. When I said to divide into
chunks, I meant leaving out the multi-folio batching and focusing on
batching the pages of a single large folio, not breaking the series down
into multiple ones. Not a big deal tho :)
>
> Thanks.