Message-ID: <20251120152126.3126298-1-senozhatsky@chromium.org>
Date: Fri, 21 Nov 2025 00:21:20 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Yuwen Chen <ywen.chen@...mail.com>,
Richard Chang <richardycc@...gle.com>
Cc: Brian Geffon <bgeffon@...gle.com>,
Fengyu Lian <licayy@...look.com>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-block@...r.kernel.org,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: [RFC PATCHv5 0/6] zram: introduce writeback bio batching
RFC
This is a different approach compared to [1]. Instead of
using the blk plug API to batch writeback bios, we just keep
submitting them and track the availability of done/idle
requests (we still use a pool of requests to put a constraint
on memory usage). The intuition is that the blk plug API is
good for sequential IO patterns, whereas zram writeback is
more likely to produce random IO patterns.
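
To make the idea a bit more concrete, here is a very rough
sketch of the submission path (this is not code from the
series; WB_BATCH_SIZE, wb_inflight, wb_wait, wb_end_io() and
wb_submit() are made-up names and error handling is omitted).
Bios are submitted as soon as they are ready, and a fixed-size
pool of in-flight requests bounds memory usage:

#include <linux/bio.h>
#include <linux/wait.h>
#include <linux/atomic.h>

/* Hypothetical pool size; in the series this is a device attr */
#define WB_BATCH_SIZE           32

static atomic_t wb_inflight = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(wb_wait);

static void wb_end_io(struct bio *bio)
{
        /* Return the request slot to the pool and wake the submitter */
        atomic_dec(&wb_inflight);
        wake_up(&wb_wait);
        bio_put(bio);
}

static int wb_submit(struct block_device *bdev, struct page *page,
                     sector_t sector)
{
        struct bio *bio;

        /* Throttle on the pool: wait until a request slot is free */
        wait_event(wb_wait, atomic_read(&wb_inflight) < WB_BATCH_SIZE);

        bio = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOIO);
        bio->bi_iter.bi_sector = sector;
        bio->bi_end_io = wb_end_io;
        __bio_add_page(bio, page, PAGE_SIZE, 0);

        atomic_inc(&wb_inflight);
        submit_bio(bio);
        return 0;
}

There is no explicit plug/unplug point in this scheme; the
only throttling is the wait for a free request slot, so random
writeback IO is not held back waiting for a batch to form.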
I have only done minimal testing so far (in a VM). More
testing (on real H/W) is needed; any help is highly
appreciated.
[1] https://lore.kernel.org/linux-kernel/20251118073000.1928107-1-senozhatsky@chromium.org
v4 -> v5:
- do not use blk plug API
Sergey Senozhatsky (6):
zram: introduce writeback bio batching
zram: add writeback batch size device attr
zram: take write lock in wb limit store handlers
zram: drop wb_limit_lock
zram: rework bdev block allocation
zram: read slot block idx under slot lock
drivers/block/zram/zram_drv.c | 470 ++++++++++++++++++++++++++--------
drivers/block/zram/zram_drv.h | 2 +-
2 files changed, 364 insertions(+), 108 deletions(-)
--
2.52.0.rc1.455.g30608eb744-goog