Message-ID: <sz4brk7ixwzud4hzcgw65au6eto2y55thcku5ouo7x6ieifvlm@t3svlkcwcjb3>
Date: Thu, 13 Nov 2025 16:37:07 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yuwen Chen <ywen.chen@...mail.com>
Cc: axboe@...nel.dk, akpm@...ux-foundation.org, bgeffon@...gle.com, 
	licayy@...look.com, linux-block@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, liumartin@...gle.com, minchan@...nel.org, richardycc@...gle.com, 
	senozhatsky@...omium.org
Subject: Re: [PATCH v4] zram: Implement multi-page write-back

On (25/11/06 09:49), Yuwen Chen wrote:
[..]
> +static int zram_writeback_complete(struct zram *zram, struct zram_wb_request *req)
> +{
> +	u32 index = 0;
> +	int err;
>  
> -	while ((pps = select_pp_slot(ctl))) {
> +	if (!test_and_clear_bit(ZRAM_WB_REQUEST_COMPLETED, &req->flags))
> +		return 0;
> +
> +	err = blk_status_to_errno(req->bio.bi_status);
> +	if (err)
> +		return err;
> +
> +	index = req->pps->index;
> +	atomic64_inc(&zram->stats.bd_writes);
> +	zram_slot_lock(zram, index);
> +	/*
> +	 * Same as above, we release slot lock during writeback so
> +	 * slot can change under us: slot_free() or slot_free() and
> +	 * reallocation (zram_write_page()). In both cases slot loses
> +	 * ZRAM_PP_SLOT flag. No concurrent post-processing can set
> +	 * ZRAM_PP_SLOT on such slots until current post-processing
> +	 * finishes.
> +	 */
> +	if (!zram_test_flag(zram, index, ZRAM_PP_SLOT))
> +		goto next;
> +
> +	zram_free_page(zram, index);
> +	zram_set_flag(zram, index, ZRAM_WB);
> +	zram_set_handle(zram, index, req->blk_idx);
> +	req->blk_idx = 0;
> +	atomic64_inc(&zram->stats.pages_stored);
> +	spin_lock(&zram->wb_limit_lock);
> +	if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
> +		zram->bd_wb_limit -=  1UL << (PAGE_SHIFT - 12);
> +	spin_unlock(&zram->wb_limit_lock);

This should be done before the submission, not after the completion.
Otherwise we can significantly overshoot the wb_limit.  And we simply
need to roll back the wb_limit adjustment for failed bio requests.

Will incorporate this into the next iteration of the patch.
