Message-ID: <c9a0f00b-3aeb-467a-8771-a4ebb57fbba0@linux.dev>
Date: Thu, 28 Nov 2024 15:00:33 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, hannes@...xchg.org,
yosryahmed@...gle.com, nphamcs@...il.com, usamaarif642@...il.com,
ryan.roberts@....com, 21cnbao@...il.com, akpm@...ux-foundation.org
Cc: wajdi.k.feghali@...el.com, vinodh.gopal@...el.com
Subject: Re: [PATCH v1 2/2] mm: zswap: zswap_store_pages() simplifications for
batching.
On 2024/11/28 06:53, Kanchana P Sridhar wrote:
> In order to set up zswap_store_pages() to enable a clean batching
> implementation in [1], this patch implements the following changes:
>
> 1) Addition of zswap_alloc_entries(), which allocates zswap entries upfront
> for all pages in the specified folio range. If this fails, we return an
> error status to zswap_store().
>
> 2) Addition of zswap_compress_pages(), which calls zswap_compress() for each
> page and returns false if any zswap_compress() fails, so that
> zswap_store_pages() can clean up the allocated resources and return an
> error status to zswap_store().
>
> 3) A "store_pages_failed" label that is a catch-all for all failure points
> in zswap_store_pages(). This facilitates cleaner error handling within
> zswap_store_pages(), which will become important for IAA compress
> batching in [1].
>
> [1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935
>
> Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
> ---
> mm/zswap.c | 93 +++++++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 71 insertions(+), 22 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index b09d1023e775..db80c66e2205 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1409,9 +1409,56 @@ static void shrink_worker(struct work_struct *w)
> * main API
> **********************************/
>
> +static bool zswap_compress_pages(struct page *pages[],
> + struct zswap_entry *entries[],
> + u8 nr_pages,
> + struct zswap_pool *pool)
> +{
> + u8 i;
> +
> + for (i = 0; i < nr_pages; ++i) {
> + if (!zswap_compress(pages[i], entries[i], pool))
> + return false;
> + }
> +
> + return true;
> +}
How about introducing a `zswap_compress_folio()` interface that
`zswap_store()` can use?
```
zswap_store()
	nr_pages = folio_nr_pages(folio)
	entries = zswap_alloc_entries(nr_pages)
	ret = zswap_compress_folio(folio, entries, pool)
	// store the entries into the xarray and LRU list
```
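Fleshed out a bit, the flow could look roughly like this (just a sketch to
show the shape; the zswap_alloc_entries() signature, the node hint and the
cleanup details are my guesses, not something the patch has to follow):
```
static bool zswap_store(struct folio *folio)
{
	struct zswap_pool *pool = zswap_pool_current_get();
	long nr_pages = folio_nr_pages(folio);
	struct zswap_entry **entries;
	long i;

	if (!pool)
		return false;

	/* Guessed signature: returns a kmalloc'ed array of entry pointers. */
	entries = zswap_alloc_entries(nr_pages, folio_nid(folio));
	if (!entries)
		goto put_pool;

	if (!zswap_compress_folio(folio, entries, pool))
		goto free_entries;

	for (i = 0; i < nr_pages; ++i) {
		/* store entries[i] into the xarray and the LRU list */
	}

	zswap_pool_put(pool);
	return true;

free_entries:
	/* free any compressed objects and the entries themselves */
put_pool:
	zswap_pool_put(pool);
	return false;
}
```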
And this version of `zswap_compress_folio()` is very simple for now:
```
static bool zswap_compress_folio(struct folio *folio,
				 struct zswap_entry *entries[],
				 struct zswap_pool *pool)
{
	long nr_pages = folio_nr_pages(folio);
	long index;

	for (index = 0; index < nr_pages; ++index) {
		struct page *page = folio_page(folio, index);

		if (!zswap_compress(page, entries[index], pool))
			return false;
	}

	return true;
}
```
This can be easily extended to support your "batched" version.
Then the old `zswap_store_page()` could be removed.
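For example, the batched variant might dispatch the compressions in
fixed-size chunks, something like this sketch (ZSWAP_MAX_BATCH_SIZE and
zswap_compress_batch() are hypothetical names standing in for the IAA
batching interface in [1], not existing APIs):
```
static bool zswap_compress_folio(struct folio *folio,
				 struct zswap_entry *entries[],
				 struct zswap_pool *pool)
{
	long nr_pages = folio_nr_pages(folio);
	long start;

	for (start = 0; start < nr_pages; start += ZSWAP_MAX_BATCH_SIZE) {
		long nr = min_t(long, nr_pages - start, ZSWAP_MAX_BATCH_SIZE);

		/*
		 * Hypothetical helper: compress nr pages of the folio
		 * starting at 'start' with one hardware-batched call.
		 */
		if (!zswap_compress_batch(folio, start, nr,
					  &entries[start], pool))
			return false;
	}

	return true;
}
```
So zswap_store() wouldn't need to change at all for batching.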
The good point is simplicity: we don't need to slice the folio into
multiple batches and then repeat the common operations for each batch,
like preparing entries and storing them into the xarray and LRU list.
Thanks.