Message-ID: <tencent_B2DC37E3A2AED0E7F179365FCB5D82455B08@qq.com>
Date: Mon, 10 Nov 2025 15:16:21 +0800
From: Yuwen Chen <ywen.chen@...mail.com>
To: senozhatsky@...omium.org
Cc: akpm@...ux-foundation.org,
	axboe@...nel.dk,
	bgeffon@...gle.com,
	licayy@...look.com,
	linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	liumartin@...gle.com,
	minchan@...nel.org,
	richardycc@...gle.com,
	ywen.chen@...mail.com
Subject: Re: [PATCH v4] zram: Implement multi-page write-back

On 10 Nov 2025 13:49:26 +0900, Sergey Senozhatsky wrote:
> As a side note:
> You almost never do sequential writes to the backing device. The
> thing is, e.g. when zram is used as swap, page faults happen randomly
> and free up (slot-free) random page-size chunks (so random bits in
> zram->bitmap become clear), which then get overwritten (zram simply
> picks the first available bit from zram->bitmap) during the next
> writeback. There is nothing sequential about that; on systems with
> sufficiently long uptime and sufficiently frequent writeback/readback
> events, the writeback bitmap becomes sparse, which results in random
> IO, so your test exercises an ideal case that almost never happens in
> practice.

Thank you very much for your reply.
As you point out, the test data I posted earlier was measured with
purely sequential writes, while a normal user environment sees a large
number of random writes. Even so, submitting multiple pages
concurrently, as this patch does, still gives a performance advantage
on the storage device. I artificially constructed the worst case (all
writes are random) with the following code:

/* Mark every block on the backing device as in use ... */
for (int i = 0; i < nr_pages; i++)
    alloc_block_bdev(zram);

/* ... then free every other block, leaving zram->bitmap sparse so that
 * later writeback allocations cannot be contiguous. */
for (int i = 0; i < nr_pages; i += 2)
    free_block_bdev(zram, i);
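
With zram->bitmap left sparse like this, every allocation during the
subsequent writeback takes the first clear bit it finds, so consecutive
written-back pages land on non-adjacent blocks of the backing device.
For reference, the allocation path in mainline zram_drv.c looks roughly
like the following (paraphrased from my reading of the code; details
may differ):

static unsigned long alloc_block_bdev(struct zram *zram)
{
    unsigned long blk_idx = 1;
retry:
    /* bit 0 is skipped so that a block index of 0 can mean "no block" */
    blk_idx = find_next_zero_bit(zram->bitmap, zram->nr_pages, blk_idx);
    if (blk_idx == zram->nr_pages)
        return 0;
    if (test_and_set_bit(blk_idx, zram->bitmap))
        goto retry;
    atomic64_inc(&zram->stats.bd_count);
    return blk_idx;
}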

On the physical machine, the measured data is as follows:
before modification:
real	0m0.624s
user	0m0.000s
sys	0m0.347s

real	0m0.663s
user	0m0.001s
sys	0m0.354s

real	0m0.635s
user	0m0.000s
sys	0m0.335s

after modification:
real	0m0.340s
user	0m0.000s
sys	0m0.239s

real	0m0.326s
user	0m0.000s
sys	0m0.230s

real	0m0.313s
user	0m0.000s
sys	0m0.223s
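
Averaging the three runs, the real time drops from roughly 0.64s to
roughly 0.33s, close to a 2x improvement even with this fully random
block layout.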

The test script is as follows:
# mknod /dev/loop45 b 7 45
# losetup /dev/loop45 ./zram_writeback.img
# echo "/dev/loop45" > /sys/block/zram0/backing_dev
# echo "1024000000" > /sys/block/zram0/disksize
# dd if=/dev/random of=/dev/zram0
# time echo "page_indexes=1-100000" > /sys/block/zram0/writeback

Thank you again for your reply.

