Message-ID: <7f401205-725e-9a83-f683-21a67500cdcd@huaweicloud.com>
Date: Mon, 25 Aug 2025 16:52:14 +0800
From: Li Nan <linan666@...weicloud.com>
To: Zhang Yi <yi.zhang@...weicloud.com>, linux-block@...r.kernel.org,
linux-raid@...r.kernel.org, drbd-dev@...ts.linbit.com
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
john.g.garry@...cle.com, hch@....de, martin.petersen@...cle.com,
axboe@...nel.dk, yi.zhang@...wei.com, yukuai3@...wei.com,
yangerkun@...wei.com
Subject: Re: [PATCH 1/2] md: init queue_limits->max_hw_wzeroes_unmap_sectors
parameter
On 2025/8/25 16:33, Zhang Yi wrote:
> From: Zhang Yi <yi.zhang@...wei.com>
>
> The parameter max_hw_wzeroes_unmap_sectors in queue_limits should be
> equal to max_write_zeroes_sectors if it is set to a non-zero value.
> However, the stacked md drivers call md_init_stacking_limits() to
> initialize this parameter to UINT_MAX but only adjust
> max_write_zeroes_sectors when setting limits. Therefore, this
> discrepancy triggers a value check failure in blk_validate_limits().
>
> Fix this failure by explicitly setting max_hw_wzeroes_unmap_sectors to
> zero.
>
> Fixes: 0c40d7cb5ef3 ("block: introduce max_{hw|user}_wzeroes_unmap_sectors to queue limits")
> Reported-by: John Garry <john.g.garry@...cle.com>
> Closes: https://lore.kernel.org/linux-block/803a2183-a0bb-4b7a-92f1-afc5097630d2@oracle.com/
> Signed-off-by: Zhang Yi <yi.zhang@...wei.com>
> ---
> drivers/md/md-linear.c | 1 +
> drivers/md/raid0.c | 1 +
> drivers/md/raid1.c | 1 +
> drivers/md/raid10.c | 1 +
> drivers/md/raid5.c | 1 +
> 5 files changed, 5 insertions(+)
>
> diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
> index 5d9b08115375..3e1f165c2d20 100644
> --- a/drivers/md/md-linear.c
> +++ b/drivers/md/md-linear.c
> @@ -73,6 +73,7 @@ static int linear_set_limits(struct mddev *mddev)
> md_init_stacking_limits(&lim);
> lim.max_hw_sectors = mddev->chunk_sectors;
> lim.max_write_zeroes_sectors = mddev->chunk_sectors;
> + lim.max_hw_wzeroes_unmap_sectors = mddev->chunk_sectors;
> lim.io_min = mddev->chunk_sectors << 9;
> err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
> if (err)
> diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
> index f1d8811a542a..419139ad7663 100644
> --- a/drivers/md/raid0.c
> +++ b/drivers/md/raid0.c
> @@ -382,6 +382,7 @@ static int raid0_set_limits(struct mddev *mddev)
> md_init_stacking_limits(&lim);
> lim.max_hw_sectors = mddev->chunk_sectors;
> lim.max_write_zeroes_sectors = mddev->chunk_sectors;
> + lim.max_hw_wzeroes_unmap_sectors = mddev->chunk_sectors;
> lim.io_min = mddev->chunk_sectors << 9;
> lim.io_opt = lim.io_min * mddev->raid_disks;
> lim.chunk_sectors = mddev->chunk_sectors;
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 408c26398321..35c6498b4917 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -3211,6 +3211,7 @@ static int raid1_set_limits(struct mddev *mddev)
>
> md_init_stacking_limits(&lim);
> lim.max_write_zeroes_sectors = 0;
> + lim.max_hw_wzeroes_unmap_sectors = 0;
> lim.features |= BLK_FEAT_ATOMIC_WRITES;
> err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
> if (err)
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index b60c30bfb6c7..9832eefb2f15 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -4008,6 +4008,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
>
> md_init_stacking_limits(&lim);
> lim.max_write_zeroes_sectors = 0;
> + lim.max_hw_wzeroes_unmap_sectors = 0;
> lim.io_min = mddev->chunk_sectors << 9;
> lim.chunk_sectors = mddev->chunk_sectors;
> lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 023649fe2476..e385ef1355e8 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -7732,6 +7732,7 @@ static int raid5_set_limits(struct mddev *mddev)
> lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
> lim.discard_granularity = stripe;
> lim.max_write_zeroes_sectors = 0;
> + lim.max_hw_wzeroes_unmap_sectors = 0;
> mddev_stack_rdev_limits(mddev, &lim, 0);
> rdev_for_each(rdev, mddev)
> queue_limits_stack_bdev(&lim, rdev->bdev, rdev->new_data_offset,
LGTM, feel free to add
Reviewed-by: Li Nan <linan122@...wei.com>
--
Thanks,
Nan