Message-Id: <20260204-b4_zcomp_stream-v1-0-35c06ce1d332@gmail.com>
Date: Wed, 04 Feb 2026 13:48:50 +0000
From: Jihan LIN via B4 Relay <devnull+linjh22s.gmail.com@...nel.org>
To: Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>, Jens Axboe <axboe@...nel.dk>
Cc: linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
Jihan LIN <linjh22s@...il.com>
Subject: [PATCH RFC 0/3] zram: Allow zcomps to manage their own streams

Hi all,

This RFC series introduces a new interface to allow zram compression
backends to manage their own streams, in addition to the existing
per-CPU stream model.

zram currently manages compression contexts via per-CPU streams, which
strictly limits concurrency to the number of online CPUs. Hardware
accelerators specialized for page compression, on the other hand,
generally process PAGE_SIZE payloads (e.g. 4K) using standard
algorithms, and they fit this model poorly for the following reasons:

- These devices utilize a hardware queue to batch requests. A typical
queue depth (e.g., 256) far exceeds the number of available CPUs.
- These devices are asymmetric. Submission is generally fast and
asynchronous, but completion incurs non-trivial latency.
- Some devices only support compression requests, leaving decompression
to be handled by software.

These properties clash with the current zcomp architecture, which
assumes that streams are inherently tied to CPU execution contexts.
That design is not flexible enough to integrate such backends: it
forces a one-size-fits-all model and limits the ability to offload
compression work from the CPU.

This series proposes a hybrid approach. While maintaining full backward
compatibility with existing backends, it introduces a new set of
operations, op->{get, put}_streams(), for backends that wish to manage
their own streams. This allows a backend to handle contention
internally and to dynamically select an execution path for the acquired
streams. A new flag is also introduced to indicate this capability at
runtime. zram_write_page() now prefers streams managed by the backend
when a bio is considered asynchronous.
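
To make the interface more concrete, here is a rough sketch of the
backend-facing side. The exact definitions live in patch 2; the
callback names and signatures below are illustrative only, not the
final interface:

/* Illustrative sketch only -- see patch 2 for the real definitions. */
struct zcomp_ops {
	/* ... existing compress/decompress callbacks ... */

	/*
	 * Optional: backends that advertise self-managed streams
	 * implement these. get_stream() may block until the backend
	 * can hand out a stream; put_stream() releases it back to
	 * the backend.
	 */
	struct zcomp_strm *(*get_stream)(struct zcomp *comp);
	void (*put_stream)(struct zcomp *comp, struct zcomp_strm *zstrm);
};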

Some design decisions are as follows (a sketch of the resulting write
path follows the list):

1. The proposed get_stream() does not take gfp_t flags to keep the
interface minimal. By design, backends are fully responsible for
allocation safety.
2. The default per-CPU streams now also imply a synchronous path for
the backends.
3. The recompression path currently relies on the default per-cpu
streams. This is a trade-off, since recompression is primarily for
memory saving, and hardware accelerators typically prioritize
throughput over compression ratio.
4. zstrm->lock is restricted to the default per-cpu streams. Backends
must implement internal locking if required. While currently exposed
in struct zstrm, this mutex is an implementation detail of the
default path. Future work may involve making the default stream
locking mechanism opaque to the backends, ensuring they interact only
with the necessary stream interfaces.
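
To illustrate how the write path picks a stream under these rules, the
logic in zram_write_page() roughly looks like the snippet below. This
is a simplified sketch, not the actual diff (see patch 3); the
zcomp_has_managed_streams() helper and the bio_is_async check are
placeholders for what the patches actually do:

	/*
	 * Simplified sketch of the stream selection in zram_write_page().
	 * zcomp_has_managed_streams() and bio_is_async stand in for the
	 * capability flag check and the asynchronicity test in the patches.
	 */
	struct zcomp *comp = zram->comps[ZRAM_PRIMARY_COMP];
	struct zcomp_strm *zstrm;

	if (zcomp_has_managed_streams(comp) && bio_is_async)
		/* Backend-managed stream; the backend resolves contention
		 * internally and may complete the request asynchronously.
		 */
		zstrm = comp->ops->get_stream(comp);
	else
		/* Default per-CPU stream; always a synchronous path. */
		zstrm = zcomp_stream_get(comp);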

Although I do not have access to an Intel IAA accelerator, and other
such accelerators may not be generally available yet, this series
should provide a good starting point for supporting batched
asynchronous operations in zram. The next step would be to introduce
an interface that allows non-blocking compression submission, and to
validate its real-world performance once such hardware accelerators
become available.

Signed-off-by: Jihan LIN <linjh22s@...il.com>
---
Jihan LIN (3):
zram: Rename zcomp_strm_{init, free}()
zram: Introduce zcomp-managed streams
zram: Use zcomp-managed streams for async write requests
drivers/block/zram/zcomp.c | 37 ++++++++++++++++++++++++++++++-------
drivers/block/zram/zcomp.h | 23 +++++++++++++++++++++--
drivers/block/zram/zram_drv.c | 28 ++++++++++++++++++++++------
3 files changed, 73 insertions(+), 15 deletions(-)
---
base-commit: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
change-id: 20260202-b4_zcomp_stream-7e9f7884e128
Best regards,
--
Jihan LIN <linjh22s@...il.com>