Message-ID: <5b0fd2a0-dffc-4f51-bdff-746e9bd611bd@oracle.com>
Date: Tue, 2 Sep 2025 13:25:39 +0100
From: John Garry <john.g.garry@...cle.com>
To: Zhang Yi <yi.zhang@...weicloud.com>, linux-block@...r.kernel.org,
        linux-raid@...r.kernel.org, drbd-dev@...ts.linbit.com
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org, hch@....de,
        martin.petersen@...cle.com, axboe@...nel.dk, yi.zhang@...wei.com,
        yukuai3@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH 1/2] md: init queue_limits->max_hw_wzeroes_unmap_sectors
 parameter

On 25/08/2025 09:33, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@...wei.com>
> 
> The parameter max_hw_wzeroes_unmap_sectors in queue_limits should be
> equal to max_write_zeroes_sectors if it is set to a non-zero value.
> However, the stacked md drivers call md_init_stacking_limits() to
> initialize this parameter to UINT_MAX but only adjust
> max_write_zeroes_sectors when setting limits. Therefore, this
> discrepancy triggers a value check failure in blk_validate_limits().
> 
> Fix this failure by explicitly setting max_hw_wzeroes_unmap_sectors to
> zero.
> 
> Fixes: 0c40d7cb5ef3 ("block: introduce max_{hw|user}_wzeroes_unmap_sectors to queue limits")
> Reported-by: John Garry <john.g.garry@...cle.com>
> Closes: https://lore.kernel.org/linux-block/803a2183-a0bb-4b7a-92f1-afc5097630d2@oracle.com/
> Signed-off-by: Zhang Yi <yi.zhang@...wei.com>

Tested-by: John Garry <john.g.garry@...cle.com> # raid 0/1/10
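
For context, the consistency check added by commit 0c40d7cb5ef3 in
blk_validate_limits() is roughly the following (a simplified sketch of
the constraint described above, not the exact upstream code):

	/*
	 * If max_hw_wzeroes_unmap_sectors is non-zero it must match
	 * max_write_zeroes_sectors.  md_init_stacking_limits() leaves it
	 * at UINT_MAX while the raid personalities only set
	 * max_write_zeroes_sectors, hence the validation failure.
	 */
	if (lim->max_hw_wzeroes_unmap_sectors &&
	    lim->max_hw_wzeroes_unmap_sectors != lim->max_write_zeroes_sectors)
		return -EINVAL;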

> diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
> index f1d8811a542a..419139ad7663 100644
> --- a/drivers/md/raid0.c
> +++ b/drivers/md/raid0.c
> @@ -382,6 +382,7 @@ static int raid0_set_limits(struct mddev *mddev)
>   	md_init_stacking_limits(&lim);
>   	lim.max_hw_sectors = mddev->chunk_sectors;
>   	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
> +	lim.max_hw_wzeroes_unmap_sectors = mddev->chunk_sectors;
>   	lim.io_min = mddev->chunk_sectors << 9;
>   	lim.io_opt = lim.io_min * mddev->raid_disks;
>   	lim.chunk_sectors = mddev->chunk_sectors;
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 408c26398321..35c6498b4917 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -3211,6 +3211,7 @@ static int raid1_set_limits(struct mddev *mddev)
>   
>   	md_init_stacking_limits(&lim);
>   	lim.max_write_zeroes_sectors = 0;
> +	lim.max_hw_wzeroes_unmap_sectors = 0;

It would be better if we documented why we cannot support this on 
raid1/10, yet we can on raid0.

I am looking through the history of why max_write_zeroes_sectors is set
to zero. Going back as far as 5026d7a9b, that commit tells us that the
retry mechanism for WRITE SAME could cause mirrors to be offlined (and
so the support was disabled), and this was simply copied for write
zeroes in 3deff1a70.
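
Something like the following in raid1_set_limits() (and its raid10
counterpart) would capture that - just a rough sketch paraphrasing the
history above, not a wording proposal:

	/*
	 * Write zeroes is disabled here: a failed WRITE SAME and its
	 * retry could end up offlining mirrors (5026d7a9b), and write
	 * zeroes inherited the same restriction (3deff1a70), so keep
	 * both limits at zero.
	 */
	lim.max_write_zeroes_sectors = 0;
	lim.max_hw_wzeroes_unmap_sectors = 0;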

