Message-ID: <tencent_865DD78A73BC3C9CAFCBAEBE222B6EA5F107@qq.com>
Date: Fri, 21 Nov 2025 16:23:58 +0800
From: Yuwen Chen <ywen.chen@...mail.com>
To: senozhatsky@...omium.org
Cc: akpm@...ux-foundation.org,
bgeffon@...gle.com,
licayy@...look.com,
linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
minchan@...nel.org,
richardycc@...gle.com,
ywen.chen@...mail.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching
On Fri, 21 Nov 2025 16:58:41 +0900, Sergey Senozhatsky wrote:
> No problem. I wonder if the effect is more visible on larger data sets.
> 0.3 second sounds like a very short write. In my VM tests I couldn't get
> more than 2 inflight requests at a time, I guess because decompression
> was much slower than IO. I wonder how many inflight requests you had in
> your tests.
I used the following change for testing, and the maximum number of inflight requests I observed was 32.
code:
@@ -983,6 +983,7 @@ static int zram_writeback_slots(struct zram *zram,
 	struct zram_pp_slot *pps;
 	int ret = 0, err = 0;
 	u32 index = 0;
+	int inflight = 0;
 	while ((pps = select_pp_slot(ctl))) {
 		spin_lock(&zram->wb_limit_lock);
@@ -993,6 +994,9 @@ static int zram_writeback_slots(struct zram *zram,
 		}
 		spin_unlock(&zram->wb_limit_lock);
+		if (inflight < atomic_read(&wb_ctl->num_inflight))
+			inflight = atomic_read(&wb_ctl->num_inflight);
+
 		while (!req) {
 			req = zram_select_idle_req(wb_ctl);
 			if (req)
@@ -1074,6 +1078,7 @@ next:
 		ret = err;
 	}
+	pr_err("%s: inflight max: %d\n", __func__, inflight);
 	return ret;
 }
log:
[3741949.842927] zram: zram_writeback_slots: inflight max: 32
Changing ZRAM_WB_REQ_CNT to 64 didn't shorten the overall time.
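(For reference, the only change in that test was the request-pool size
constant from the batching patches, i.e. roughly:

	#define ZRAM_WB_REQ_CNT		64

so the cap itself no longer seems to be the limiting factor here.)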
> I think page-fault latency of a written-back page is expected to be
> higher, that's a trade-off that we agree on. Off the top of my head,
> I don't think we can do anything about it.
>
> Is loop device always used as for writeback targets?
On the Android platform, currently only the loop device is supported as
the backend for writeback, possibly for security reasons. I noticed that
EROFS implemented CONFIG_EROFS_FS_BACKED_BY_FILE to reduce this kind of
latency; I think zram might be able to do something similar.
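For illustration only, a very rough sketch of what a direct file-backed
read path could look like on the zram side. This is not how EROFS does
it (EROFS issues kiocb-based direct I/O to the backing file); the names
zram_file_read_page() and zram->backing_filp, as well as the
blk_idx-based offset, are just assumptions for the sketch, and error
handling is simplified:

/*
 * Hypothetical sketch: on a fault for a written-back slot, read the
 * page straight from the backing file instead of submitting a bio
 * through the loop device.  zram->backing_filp is an assumed field;
 * buffered kernel_read() is used only to keep the example short.
 */
static int zram_file_read_page(struct zram *zram, struct page *page,
			       unsigned long blk_idx)
{
	loff_t pos = (loff_t)blk_idx << PAGE_SHIFT;
	void *dst = kmap_local_page(page);
	ssize_t n;

	n = kernel_read(zram->backing_filp, dst, PAGE_SIZE, &pos);
	kunmap_local(dst);

	return n == PAGE_SIZE ? 0 : -EIO;
}

Whether that actually improves fault latency compared to the loop
device path is something I would still need to measure.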