Message-ID: <buckmtxvdfnpgo56owip3fjqbzraws2wvtomzfkywhczckoqlt@fifgyl5fjpbt>
Date: Fri, 21 Nov 2025 18:12:29 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yuwen Chen <ywen.chen@...mail.com>
Cc: senozhatsky@...omium.org, akpm@...ux-foundation.org, 
	bgeffon@...gle.com, licayy@...look.com, linux-block@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, minchan@...nel.org, richardycc@...gle.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching

On (25/11/21 16:23), Yuwen Chen wrote:
> I used the following code for testing here, and the result was 32.
> 
> code:
> @@ -983,6 +983,7 @@ static int zram_writeback_slots(struct zram *zram,
>         struct zram_pp_slot *pps;
>         int ret = 0, err = 0;
>         u32 index = 0;
> +       int inflight = 0;
>  
>         while ((pps = select_pp_slot(ctl))) {
>                 spin_lock(&zram->wb_limit_lock);
> @@ -993,6 +994,9 @@ static int zram_writeback_slots(struct zram *zram,
>                 }
>                 spin_unlock(&zram->wb_limit_lock);
>  
> +               if (inflight < atomic_read(&wb_ctl->num_inflight))
> +                       inflight = atomic_read(&wb_ctl->num_inflight);
> +
>                 while (!req) {
>                         req = zram_select_idle_req(wb_ctl);
>                         if (req)
> @@ -1074,6 +1078,7 @@ next:
>                         ret = err;
>         }
>  
> +       pr_err("%s: inflight max: %d\n", __func__, inflight);
>         return ret;
>  }

I think this will always give you 32 (or your current batch size limit),
simply because of the way the loop works: we first deplete the entire
->idle list (reaching the maximum ->inflight) and only then complete
finished requests (dropping ->inflight).
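
To illustrate, here is a quick userspace toy model of the two loop
orders.  The idle/inflight/done counters stand in for the ->idle and
->done_reqs lists and ->num_inflight, and device completions are
simulated with coin flips -- treat it as a sketch of the control flow,
not of the kernel code:

#include <stdio.h>
#include <stdlib.h>

#define BATCH_SIZE	32	/* number of writeback requests */
#define NR_SLOTS	1000	/* pp slots to write back */

static int idle;	/* requests on the idle list */
static int inflight;	/* ->num_inflight: submitted, not yet reaped */
static int done;	/* finished by the device, not yet reaped */

/* The "device": finish a random number of outstanding bios. */
static void simulate_device(void)
{
	while (inflight - done > 0 && rand() % 4)
		done++;
}

/* wait_event(wb_ctl->done_wait, !list_empty(&wb_ctl->done_reqs)) */
static void wait_for_done(void)
{
	while (done == 0)
		simulate_device();
}

/* zram_complete_done_reqs(): move finished requests back to idle */
static void reap_done(void)
{
	idle += done;
	inflight -= done;
	done = 0;
}

static int writeback(int complete_first)
{
	int slot, max_inflight = 0;

	idle = BATCH_SIZE;
	inflight = done = 0;

	for (slot = 0; slot < NR_SLOTS; slot++) {
		if (inflight > max_inflight)	/* the instrumentation */
			max_inflight = inflight;

		for (;;) {
			if (complete_first)
				reap_done();
			if (idle > 0) {		/* zram_select_idle_req() */
				idle--;
				break;
			}
			wait_for_done();
			if (!complete_first)
				reap_done();
		}

		inflight++;		/* submit the bio */
		simulate_device();	/* completions are asynchronous */
	}
	return max_inflight;
}

int main(void)
{
	printf("select idle first: max inflight %d\n", writeback(0));
	printf("complete first:    max inflight %d\n", writeback(1));
	return 0;
}

With the "select idle first" order the sampled maximum has to be the
full batch size, because nothing is reaped until ->idle is empty; with
the "complete first" order it stays close to the number of bios that
are actually outstanding at the device.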

I had a version of the patch with a different main loop: it would
always complete finished requests first.  I think that one will give
you an accurate ->inflight number.

---
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index ab0785878069..398609e9d061 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -999,13 +999,6 @@ static int zram_writeback_slots(struct zram *zram,
 		}
 
 		while (!req) {
-			req = zram_select_idle_req(wb_ctl);
-			if (req)
-				break;
-
-			wait_event(wb_ctl->done_wait,
-				   !list_empty(&wb_ctl->done_reqs));
-
 			err = zram_complete_done_reqs(zram, wb_ctl);
 			/*
 			 * BIO errors are not fatal, we continue and simply
@@ -1017,6 +1010,13 @@ static int zram_writeback_slots(struct zram *zram,
 			 */
 			if (err)
 				ret = err;
+
+			req = zram_select_idle_req(wb_ctl);
+			if (req)
+				break;
+
+			wait_event(wb_ctl->done_wait,
+				   !list_empty(&wb_ctl->done_reqs));
 		}
 
 		if (blk_idx == INVALID_BDEV_BLOCK) {

---

> > I think page-fault latency of a written-back page is expected to be
> > higher; that's a trade-off that we agree on.  Off the top of my head,
> > I don't think we can do anything about it.
> >
> > Is a loop device always used as the writeback target?
> 
> On the Android platform, currently only the loop device is supported as
> the writeback backend, possibly for security reasons.  I noticed that
> EROFS has implemented CONFIG_EROFS_FS_BACKED_BY_FILE to reduce this
> latency.  I think ZRAM might also be able to do this.

I see.  Do you use S/W or H/W compression?
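
By the way, on the file-backed idea: conceptually it would let us skip
the loop-device hop and read the backing file directly at fault time,
roughly like below.  This is only a sketch of the concept with made-up
names (zram_file_backed_read(), the blk_idx-to-offset mapping), not
EROFS's or zram's actual code:

#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * Sketch: read a written-back page straight from the backing file
 * instead of submitting a bio to a loop device.  zram would need to
 * hold a struct file * to the backing file for this to work.
 */
static int zram_file_backed_read(struct file *backing_file,
				 struct page *page, unsigned long blk_idx)
{
	loff_t pos = (loff_t)blk_idx << PAGE_SHIFT;
	void *dst = kmap_local_page(page);
	ssize_t ret;

	/* synchronous read through the backing file's page cache */
	ret = kernel_read(backing_file, dst, PAGE_SIZE, &pos);
	kunmap_local(dst);

	if (ret < 0)
		return ret;
	return ret == PAGE_SIZE ? 0 : -EIO;
}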
