Message-Id: <20231213-zswap-dstmem-v4-0-f228b059dd89@bytedance.com>
Date: Tue, 26 Dec 2023 15:54:07 +0000
From: Chengming Zhou <zhouchengming@...edance.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Seth Jennings <sjenning@...hat.com>, Johannes Weiner <hannes@...xchg.org>,
Vitaly Wool <vitaly.wool@...sulko.com>, Nhat Pham <nphamcs@...il.com>, Chris Li <chriscli@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>, Dan Streetman <ddstreet@...e.org>
Cc: linux-kernel@...r.kernel.org, Chengming Zhou <zhouchengming@...edance.com>, linux-mm@...ck.org,
Nhat Pham <nphamcs@...il.com>, Yosry Ahmed <yosryahmed@...gle.com>, Chris Li <chrisl@...nel.org>
Subject: [PATCH v4 0/6] mm/zswap: dstmem reuse optimizations and cleanups
Hi everyone,
Changes in v4:
- Collect Reviewed-by and Acked-by tags.
- Fold in the comment fix in zswap_writeback_entry() from Yosry Ahmed.
- Add patch to change per-cpu mutex and dstmem to per-acomp_ctx.
- Just rename crypto_acomp_ctx->dstmem field to buffer.
- Link to v3: https://lore.kernel.org/r/20231213-zswap-dstmem-v3-0-4eac09b94ece@bytedance.com
Changes in v3:
- Collect Reviewed-by tag.
- Drop the __zswap_store() refactoring part.
- Link to v2: https://lore.kernel.org/r/20231213-zswap-dstmem-v2-0-daa5d9ae41a7@bytedance.com
Changes in v2:
- Add more changelog and test data about changing the dstmem size to one page.
- Reorder patches to put the dstmem reuse and the __zswap_load()
  refactoring together, still refactoring after the dstmem reuse since
  we don't want to handle __zswap_load() failures caused by memory
  allocation failure in zswap_writeback_entry().
- Append a patch to use the percpu mutex and buffer directly in
  load/store, and refactor out __zswap_store() to simplify zswap_store().
- Link to v1: https://lore.kernel.org/r/20231213-zswap-dstmem-v1-0-896763369d04@bytedance.com
This series is split out from [1] to include only the zswap dstmem
reuse optimizations and cleanups; the rbtree breakdown part is
deferred, to be retested after the rbtree is converted to an xarray.
The problem this series optimizes is that zswap_load() and
zswap_writeback_entry() have to allocate a temporary buffer to support
!zpool_can_sleep_mapped() zpools. We can avoid that by reusing the
percpu crypto_acomp_ctx->dstmem, which is also used by zswap_store()
and protected by the same percpu crypto_acomp_ctx->mutex.
[1] https://lore.kernel.org/all/20231206-zswap-lock-optimize-v1-0-e25b059f9c3a@bytedance.com/
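To make the reuse concrete, here is a rough sketch of the decompression
path after the change (hand-written for illustration; the helper name,
the entry->pool->zpool access, and the embedded mutex are assumptions,
not the exact code in these patches):

/*
 * Illustrative sketch only: decompress under the per-cpu mutex,
 * bouncing through the per-cpu dstmem when the zpool cannot stay
 * mapped across a potentially sleeping decompression.
 */
static void zswap_decompress_sketch(struct zswap_entry *entry,
                                    struct page *page)
{
        struct zpool *zpool = entry->pool->zpool;   /* layout assumed */
        struct crypto_acomp_ctx *acomp_ctx;
        struct scatterlist input, output;
        u8 *src;

        acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
        mutex_lock(&acomp_ctx->mutex);

        src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
        if (!zpool_can_sleep_mapped(zpool)) {
                /*
                 * Copy into the per-cpu dstmem and unmap right away,
                 * instead of kmalloc'ing a temporary buffer as before.
                 * dstmem is free to use here: the same mutex already
                 * serializes it against zswap_store() on this CPU.
                 */
                memcpy(acomp_ctx->dstmem, src, entry->length);
                zpool_unmap_handle(zpool, entry->handle);
                src = acomp_ctx->dstmem;
        }

        sg_init_one(&input, src, entry->length);
        sg_init_table(&output, 1);
        sg_set_page(&output, page, PAGE_SIZE, 0);
        acomp_request_set_params(acomp_ctx->req, &input, &output,
                                 entry->length, PAGE_SIZE);
        /* error handling elided in this sketch */
        crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req),
                        &acomp_ctx->wait);
        mutex_unlock(&acomp_ctx->mutex);

        if (zpool_can_sleep_mapped(zpool))
                zpool_unmap_handle(zpool, entry->handle);
}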
Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
---
Chengming Zhou (6):
mm/zswap: change dstmem size to one page
mm/zswap: reuse dstmem when decompress
mm/zswap: refactor out __zswap_load()
mm/zswap: cleanup zswap_load()
mm/zswap: cleanup zswap_writeback_entry()
mm/zswap: change per-cpu mutex and buffer to per-acomp_ctx
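The end state after the last patch is roughly the following (a sketch
of the intended layout; the exact field order in the final patch may
differ):

struct crypto_acomp_ctx {
        struct crypto_acomp *acomp;
        struct acomp_req *req;
        struct crypto_wait wait;
        u8 *buffer;             /* was the separate per-cpu dstmem */
        struct mutex mutex;     /* was the separate per-cpu mutex */
};

Bundling the buffer and mutex with the acomp state they protect removes
the separate per-cpu variables and their CPU hotplug setup, which is
presumably why include/linux/cpuhotplug.h loses a line in the diffstat
below.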
include/linux/cpuhotplug.h | 1 -
mm/zswap.c | 246 +++++++++++++--------------------------------
2 files changed, 71 insertions(+), 176 deletions(-)
---
base-commit: 1f242c1964cf9b8d663a2fd72159b296205a8126
change-id: 20231213-zswap-dstmem-d828f563303d
Best regards,
--
Chengming Zhou <zhouchengming@...edance.com>