Message-ID: <ts32xzxrpxmwf3okxo4bu2ynbgnfe6mehf5h6eibp7dp3r6jp7@4f7oz6tzqwxn>
Date: Fri, 21 Nov 2025 16:58:41 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yuwen Chen <ywen.chen@...mail.com>
Cc: senozhatsky@...omium.org, akpm@...ux-foundation.org,
bgeffon@...gle.com, licayy@...look.com, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, minchan@...nel.org, richardycc@...gle.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching
On (25/11/21 15:44), Yuwen Chen wrote:
> On Fri, 21 Nov 2025 16:32:27 +0900, Sergey Senozhatsky wrote:
> > Is "before" blk-plug based approach and "after" this new approach?
>
> Sorry, I got the before and after mixed up.
No problem. I wonder if the effect is more visible on larger data sets;
0.3 seconds sounds like a very short write. In my VM tests I couldn't get
more than 2 inflight requests at a time, I guess because decompression
was much slower than the IO. I wonder how many inflight requests you had
in your tests.
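Just to make sure we are talking about the same thing, this is roughly
what I mean by the blk-plug based approach (a sketch only; zram_wb_entry
and zram_wb_alloc_bio() are made-up names here, only blk_start_plug()/
blk_finish_plug() and submit_bio() are the real block layer API):

static void zram_writeback_batch(struct list_head *entries)
{
        struct blk_plug plug;
        struct zram_wb_entry *entry;            /* hypothetical */

        blk_start_plug(&plug);
        list_for_each_entry(entry, entries, node) {
                /* hypothetical helper that builds a bio for one entry */
                struct bio *bio = zram_wb_alloc_bio(entry);

                /* bios queue up on the per-task plug list ... */
                submit_bio(bio);
        }
        /* ... and get flushed to the backing device in one batch here */
        blk_finish_plug(&plug);
}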
> In addition, I also have some related questions to consult:
>
> 1. Will page fault exceptions be delayed during the writeback processing?
I don't think our reads are blocked by writes.
> 2. Since the loop device uses a work queue to handle requests, when
> the system load is relatively high, will it have a relatively large
> impact on the latency of page fault exceptions? Is there any way to solve
> this problem?
I think the page-fault latency of a written-back page is expected to be
higher; that's a trade-off that we agreed on. Off the top of my head,
I don't think we can do anything about it.
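Roughly, the fault path for a written-back page looks like this
(illustrative pseudo-code only, the function names below are not the
actual zram code):

static int zram_read_page_on_fault(struct zram *zram, u32 index,
                                   struct page *page)
{
        if (slot_written_back(zram, index))     /* hypothetical */
                /*
                 * Synchronous read from the backing device; with a loop
                 * device as the backing storage this also waits for the
                 * loop worker (workqueue) to pick up the request.
                 */
                return read_from_backing_dev(zram, index, page);

        /* regular path: decompress straight from memory, no I/O */
        return decompress_slot(zram, index, page);
}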
Is a loop device always used as the writeback target?