Message-ID: <tencent_301571E78C8FB8CE9FE3E5857DC174E5150A@qq.com>
Date: Fri, 21 Nov 2025 15:14:54 +0800
From: Yuwen Chen <ywen.chen@...mail.com>
To: senozhatsky@...omium.org
Cc: akpm@...ux-foundation.org,
bgeffon@...gle.com,
licayy@...look.com,
linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
minchan@...nel.org,
richardycc@...gle.com,
ywen.chen@...mail.com
Subject: [RFC PATCHv5 0/6] zram: introduce writeback bio batching

On Fri, 21 Nov 2025 00:21:20 +0900, Sergey Senozhatsky wrote:
> This is a different approach compared to [1]. Instead of
> using blk plug API to batch writeback bios, we just keep
> submitting them and track the availability of done/idle requests
> (we still use a pool of requests, to put a constraint on
> memory usage). The intuition is that blk plug API is good
> for sequential IO patterns, but zram writeback is more
> likely to use random IO patterns.
>
> I only did minimal testing so far (in a VM). More testing
> (on real H/W) is needed, any help is highly appreciated.
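
Just to restate how I understand the new scheme, here is a rough
userspace sketch (only an analogy, not the actual kernel code; names
like wb_submit() and POOL_SIZE are made up): there is no plugging or
batching, a request is submitted as soon as an idle slot exists, and a
fixed pool bounds how many can be in flight at once.

/* build: gcc -pthread wb_sketch.c -o wb_sketch */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE	8	/* max in-flight requests (the "pool") */
#define NR_REQUESTS	64	/* total writeback requests to issue   */

static sem_t idle_slots;	/* counts idle/done requests in the pool */

/* Stands in for the block layer completing one writeback request. */
static void *wb_complete(void *arg)
{
	long idx = (long)arg;

	usleep(2000 + (idx % 5) * 1000);	/* pretend the device is busy */
	printf("request %ld done\n", idx);
	sem_post(&idle_slots);			/* request becomes idle again */
	return NULL;
}

/* Submit one request; block only when the whole pool is busy. */
static void wb_submit(long idx)
{
	pthread_t t;

	sem_wait(&idle_slots);			/* wait for an idle request */
	pthread_create(&t, NULL, wb_complete, (void *)idx);
	pthread_detach(t);
}

int main(void)
{
	long i;

	sem_init(&idle_slots, 0, POOL_SIZE);

	/* No plug/unplug batching: submit whenever a slot is free. */
	for (i = 0; i < NR_REQUESTS; i++)
		wb_submit(i);

	/* Drain: once every slot is idle again, all requests completed. */
	for (i = 0; i < POOL_SIZE; i++)
		sem_wait(&idle_slots);

	sem_destroy(&idle_slots);
	return 0;
}
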
I ran a test on an NVMe host. With a fully random request pattern,
this approach was indeed a bit faster than the previous one.
before:
real 0m0.261s
user 0m0.000s
sys 0m0.243s

real 0m0.260s
user 0m0.000s
sys 0m0.244s

real 0m0.259s
user 0m0.000s
sys 0m0.243s

after:
real 0m0.322s
user 0m0.000s
sys 0m0.214s

real 0m0.326s
user 0m0.000s
sys 0m0.206s

real 0m0.325s
user 0m0.000s
sys 0m0.215s
This is an encouraging result. However, I'm also quite curious about
the test results on devices like UFS, which have relatively little
internal memory.