Message-ID: <tencent_D7ED79431EC0D75957539121B7CC7897EB06@qq.com>
Date: Mon, 24 Nov 2025 10:15:49 +0800
From: Yuwen Chen <ywen.chen@...mail.com>
To: senozhatsky@...omium.org
Cc: akpm@...ux-foundation.org,
	bgeffon@...gle.com,
	licayy@...look.com,
	linux-block@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	minchan@...nel.org,
	richardycc@...gle.com,
	ywen.chen@...mail.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching

On Fri, 21 Nov 2025 18:12:29 +0900, Sergey Senozhatsky wrote:
> I had a version of the patch that had different main loop. It would
> always first complete finished requests.  I think this one will give
> accurate ->inflight number.

With the patch below applied, the maximum number of in-flight requests
measured is 32, so 32 might be a reasonably sensible value to keep here.

 /* XXX: should be a per-device sysfs attr */
-#define ZRAM_WB_REQ_CNT 32
+#define ZRAM_WB_REQ_CNT 64
 
 static struct zram_wb_ctl *init_wb_ctl(void)
 {
@@ -983,6 +983,7 @@ static int zram_writeback_slots(struct zram *zram,
        struct zram_pp_slot *pps;
        int ret = 0, err = 0;
        u32 index = 0;
+       int inflight = 0;
 
        while ((pps = select_pp_slot(ctl))) {
                spin_lock(&zram->wb_limit_lock);
@@ -993,14 +994,10 @@ static int zram_writeback_slots(struct zram *zram,
                }
                spin_unlock(&zram->wb_limit_lock);
 
-               while (!req) {
-                       req = zram_select_idle_req(wb_ctl);
-                       if (req)
-                               break;
-
-                       wait_event(wb_ctl->done_wait,
-                                  !list_empty(&wb_ctl->done_reqs));
+               if (inflight < atomic_read(&wb_ctl->num_inflight))
+                       inflight = atomic_read(&wb_ctl->num_inflight);
 
+               while (!req) {
                        err = zram_complete_done_reqs(zram, wb_ctl);
                        /*
                         * BIO errors are not fatal, we continue and simply
@@ -1012,6 +1009,13 @@ static int zram_writeback_slots(struct zram *zram,
                         */
                        if (err)
                                ret = err;
+
+                       req = zram_select_idle_req(wb_ctl);
+                       if (req)
+                               break;
+
+                       wait_event(wb_ctl->done_wait,
+                                  !list_empty(&wb_ctl->done_reqs));
                }
 
                if (!blk_idx) {
@@ -1074,6 +1078,7 @@ next:
                        ret = err;
        }
 
+       pr_err("%s: inflight max: %d\n", __func__, inflight);
        return ret;
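
For context, the sampling above only makes sense given how num_inflight is
maintained around bio submission and completion. Below is a minimal sketch of
that accounting, assuming the series increments the counter on submit and
decrements it in the bio end_io handler before waking the waiter; the names
zram_wb_submit_req()/zram_wb_end_io() and the fields req->bio, req->wb_ctl,
req->entry and wb_ctl->done_lock are illustrative only, not taken from the
actual patches:

static void zram_wb_submit_req(struct zram_wb_ctl *wb_ctl,
			       struct zram_wb_req *req)
{
	/* account the request before the bio is sent to the backing dev */
	atomic_inc(&wb_ctl->num_inflight);
	submit_bio(&req->bio);
}

static void zram_wb_end_io(struct bio *bio)
{
	struct zram_wb_req *req = bio->bi_private;
	struct zram_wb_ctl *wb_ctl = req->wb_ctl;
	unsigned long flags;

	/* end_io may run in interrupt context, so take the lock irq-safe */
	spin_lock_irqsave(&wb_ctl->done_lock, flags);
	list_add_tail(&req->entry, &wb_ctl->done_reqs);
	spin_unlock_irqrestore(&wb_ctl->done_lock, flags);

	/* drop the in-flight count and let zram_writeback_slots() reap it */
	atomic_dec(&wb_ctl->num_inflight);
	wake_up(&wb_ctl->done_wait);
}

With accounting along those lines, the atomic_read() snapshot taken at the
top of the writeback loop records the largest in-flight depth sampled across
iterations, which is where the measured value of 32 comes from.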

