Message-ID: <90c06ff6-009f-430a-9b81-ca795e3115b0@suse.de>
Date: Fri, 21 Nov 2025 08:40:23 +0100
From: Hannes Reinecke <hare@...e.de>
To: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>, Minchan Kim <minchan@...nel.org>,
Yuwen Chen <ywen.chen@...mail.com>, Richard Chang <richardycc@...gle.com>
Cc: Brian Geffon <bgeffon@...gle.com>, Fengyu Lian <licayy@...look.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-block@...r.kernel.org, Minchan Kim <minchan@...gle.com>
Subject: Re: [RFC PATCHv5 1/6] zram: introduce writeback bio batching
On 11/20/25 16:21, Sergey Senozhatsky wrote:
> Currently, zram writeback supports only a single bio writeback
> operation, waiting for bio completion before post-processing the
> next pp-slot. This works, in general, but has certain throughput
> limitations. Introduce batched (multiple) bio writeback support
> to take advantage of parallel request processing and better
> request scheduling.
>
> For the time being, the writeback batch size (maximum number of
> in-flight bio requests) is set to 32 for all devices. A follow-up
> patch adds a writeback_batch_size device attribute, so the
> batch size becomes run-time configurable.
>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@...omium.org>
> Co-developed-by: Yuwen Chen <ywen.chen@...mail.com>
> Co-developed-by: Richard Chang <richardycc@...gle.com>
> Suggested-by: Minchan Kim <minchan@...gle.com>
> ---
> drivers/block/zram/zram_drv.c | 366 +++++++++++++++++++++++++++-------
> 1 file changed, 298 insertions(+), 68 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index a43074657531..37c1416ac902 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
[ .. ]
> +static int zram_complete_done_reqs(struct zram *zram,
> + struct zram_wb_ctl *wb_ctl)
> +{
> + struct zram_wb_req *req;
> + unsigned long flags;
> int ret = 0, err;
> - u32 index;
>
> - page = alloc_page(GFP_KERNEL);
> - if (!page)
> - return -ENOMEM;
> + while (1) {
> + spin_lock_irqsave(&wb_ctl->done_lock, flags);
> + req = list_first_entry_or_null(&wb_ctl->done_reqs,
> + struct zram_wb_req, entry);
> + if (req)
> + list_del(&req->entry);
> + spin_unlock_irqrestore(&wb_ctl->done_lock, flags);
> +
> + if (!req)
> + break;
> +
> + err = zram_writeback_complete(zram, req);
> + if (err)
> + ret = err;
> +
> + atomic_dec(&wb_ctl->num_inflight);
> + release_pp_slot(zram, req->pps);
> + req->pps = NULL;
> +
> + list_add(&req->entry, &wb_ctl->idle_reqs);
Shouldn't this be locked?
> + }
> +
> + return ret;
> +}
> +
> +static struct zram_wb_req *zram_select_idle_req(struct zram_wb_ctl *wb_ctl)
> +{
> + struct zram_wb_req *req;
> +
> + req = list_first_entry_or_null(&wb_ctl->idle_reqs,
> + struct zram_wb_req, entry);
> + if (req)
> + list_del(&req->entry);
See above. I think you need to lock this to avoid someone stepping in
here and modifying the element under you.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@...e.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich