Message-ID: <CAJD7tkbicqevYuJbTUZBQUo2n_k-HKTixG5TbXGtdbtFJgFmfg@mail.gmail.com>
Date: Wed, 13 Dec 2023 15:24:17 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Nhat Pham <nphamcs@...il.com>, Chris Li <chriscli@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Vitaly Wool <vitaly.wool@...sulko.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 1/5] mm/zswap: reuse dstmem when decompress
On Tue, Dec 12, 2023 at 8:18 PM Chengming Zhou
<zhouchengming@...edance.com> wrote:
>
> In the !zpool_can_sleep_mapped() case, such as zsmalloc, we need to first
> copy the entry->handle memory to a temporary buffer, which is allocated
> using kmalloc.
>
> Obviously we can reuse the per-compressor dstmem to avoid allocating
> every time, since it's per-CPU per-compressor and protected by the mutex.
>
> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
> Reviewed-by: Nhat Pham <nphamcs@...il.com>
> ---
> mm/zswap.c | 29 +++++++++--------------------
> 1 file changed, 9 insertions(+), 20 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 7ee54a3d8281..edb8b45ed5a1 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1772,9 +1772,9 @@ bool zswap_load(struct folio *folio)
> struct zswap_entry *entry;
> struct scatterlist input, output;
> struct crypto_acomp_ctx *acomp_ctx;
> - u8 *src, *dst, *tmp;
> + unsigned int dlen = PAGE_SIZE;
> + u8 *src, *dst;
> struct zpool *zpool;
> - unsigned int dlen;
> bool ret;
>
> VM_WARN_ON_ONCE(!folio_test_locked(folio));
> @@ -1796,27 +1796,18 @@ bool zswap_load(struct folio *folio)
> goto stats;
> }
>
> - zpool = zswap_find_zpool(entry);
> - if (!zpool_can_sleep_mapped(zpool)) {
> - tmp = kmalloc(entry->length, GFP_KERNEL);
> - if (!tmp) {
> - ret = false;
> - goto freeentry;
> - }
> - }
> -
> /* decompress */
> - dlen = PAGE_SIZE;
> - src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
> + acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> + mutex_lock(acomp_ctx->mutex);
>
> + zpool = zswap_find_zpool(entry);
> + src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
> if (!zpool_can_sleep_mapped(zpool)) {
> - memcpy(tmp, src, entry->length);
> - src = tmp;
> + memcpy(acomp_ctx->dstmem, src, entry->length);
> + src = acomp_ctx->dstmem;
I don't like that we are now using acomp_ctx->dstmem and
acomp_ctx->mutex for purposes other than what the naming suggests.
How about removing these two fields from acomp_ctx, directly using
zswap_dstmem and zswap_mutex in both the load and store paths, renaming
them, and adding proper comments above their definitions saying they are
generic percpu buffers for the load and store paths?