Message-ID: <tencent_17B2F3A0BCCBAFD3AE942235EE98F1595707@qq.com>
Date: Wed, 12 Nov 2025 14:57:44 +0800
From: Yuwen Chen <ywen.chen@...mail.com>
To: senozhatsky@...omium.org
Cc: akpm@...ux-foundation.org,
	axboe@...nel.dk,
	bgeffon@...gle.com,
	licayy@...look.com,
	linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	liumartin@...gle.com,
	minchan@...nel.org,
	richardycc@...gle.com,
	ywen.chen@...mail.com
Subject: Re: [PATCH v4] zram: Implement multi-page write-back

On Wed, 12 Nov 2025 14:16:20 +0900, Sergey Senozhatsky wrote:
> The thing that I'm curious about is why does it help for flash storage?
> It's not a spinning disk, where seek times dominate the IO time.

1. Flash-based storage devices such as UFS and NVMe implement a command
queue mechanism, so submitting multiple random write requests at once can
fully utilize their bus bandwidth.

2. When consecutive pages are submitted as separate requests instead of as
one contiguous request, write amplification is more likely to occur,
because UFS maintains an internal LBA (Logical Block Addressing) mapping
table.

3. Sequential writes place lower demands on bus bandwidth.
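
To illustrate point 2, here is a minimal sketch of submitting a batch of
consecutive pages as a single sequential bio instead of one bio per page.
This is not the patch's actual code: the helper name, the batch constant,
and the omitted bi_end_io/error handling are illustrative assumptions.

/*
 * Illustrative sketch only: write nr_pages consecutive backing-device
 * blocks with one bio, so the device sees one sequential LBA range.
 * Completion (bio->bi_end_io) and error handling are omitted.
 */
#include <linux/bio.h>
#include <linux/blkdev.h>

static void zram_wb_submit_batch(struct block_device *bdev,
				 struct page **pages, int nr_pages,
				 sector_t first_sector)
{
	struct bio *bio;
	int i;

	bio = bio_alloc(bdev, nr_pages, REQ_OP_WRITE, GFP_NOIO);
	bio->bi_iter.bi_sector = first_sector;

	/* One contiguous bio keeps the device-side LBA mapping sequential. */
	for (i = 0; i < nr_pages; i++)
		__bio_add_page(bio, pages[i], PAGE_SIZE, 0);

	submit_bio(bio);
}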

> My next question is: what problem do you solve with this?  I mean,
> do you use it production (somewhere).  If so, do you have a rough
> number of how many MiBs you writeback and how often, and what's the
> performance impact of this patch.  Again, if you use it in production.

We haven't shipped this commit in a production device yet; we are
currently trialing it on mobile phones running Android. Our plan is as
follows:

1. When an app switches to the background, use process_madvise() to swap
its anonymous pages out to zram. Then, when the system is idle, write the
app's pages back to external UFS storage through the zram writeback
interface (see the userspace sketch after this list).

2. When system memory is tight but the IO load is low, use the spare IO
bandwidth to speed up memory reclaim.

On Wed, 12 Nov 2025 14:18:01 +0900, Sergey Senozhatsky wrote:
> Why do you do this do-while loop here?

When there are no free zram_wb_request structures left in req_pool,
zram_writeback_next_request() returns NULL. In that case the loop has to
retry until a zram_wb_request becomes available, roughly as in the sketch
below.

