Message-ID: <858e807b-6fea-57e0-f077-f8d24f412fae@huaweicloud.com>
Date: Wed, 15 Oct 2025 10:28:47 +0800
From: Li Nan <linan666@...weicloud.com>
To: Xiao Ni <xni@...hat.com>, linan666@...weicloud.com
Cc: corbet@....net, song@...nel.org, yukuai3@...wei.com, hare@...e.de,
 linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
 linux-raid@...r.kernel.org, martin.petersen@...cle.com,
 yangerkun@...wei.com, yi.zhang@...wei.com
Subject: Re: [PATCH v6] md: allow configuring logical block size



On 2025/9/28 9:46, Xiao Ni wrote:

>> +static struct md_sysfs_entry md_logical_block_size =
>> +__ATTR(logical_block_size, S_IRUGO|S_IWUSR, lbs_show, lbs_store);
>>
>>   static struct attribute *md_default_attrs[] = {
>>          &md_level.attr,
>> @@ -5933,6 +5995,7 @@ static struct attribute *md_redundancy_attrs[] = {
>>          &md_scan_mode.attr,
>>          &md_last_scan_mode.attr,
>>          &md_mismatches.attr,
>> +       &md_logical_block_size.attr,
> 
> Hi
> 
> I just saw your reply to the v5 email and noticed this place. The logical
> block size doesn't have a relationship with the sync action, right?
> md_redundancy_attrs is used for sync attributes. So is it better to
> put this into md_default_attrs?
> 
> 

Hi, thanks for your review.

Agreed, I will move it to md_default_attrs in the next version.
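
For reference, a rough sketch of the intended placement (only md_level.attr
is taken from the quoted context; the remaining entries of the array are
elided and unchanged):

	static struct attribute *md_default_attrs[] = {
		&md_level.attr,
		...
		&md_logical_block_size.attr,	/* moved here from md_redundancy_attrs */
		...
		NULL,
	};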


>>          &md_sync_min.attr,
>>          &md_sync_max.attr,
>>          &md_sync_io_depth.attr,
>> @@ -6052,6 +6115,17 @@ int mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim,
>>                          return -EINVAL;
>>          }
>>
>> +       /*
>> +        * Before RAID adding folio support, the logical_block_size
>> +        * should be smaller than the page size.
>> +        */
>> +       if (lim->logical_block_size > PAGE_SIZE) {
>> +               pr_err("%s: logical_block_size must not larger than PAGE_SIZE\n",
>> +                       mdname(mddev));
>> +               return -EINVAL;
>> +       }
>> +       mddev->logical_block_size = lim->logical_block_size;
>> +
>>          return 0;
>>   }
>>   EXPORT_SYMBOL_GPL(mddev_stack_rdev_limits);
>> @@ -6690,6 +6764,7 @@ static void md_clean(struct mddev *mddev)
>>          mddev->chunk_sectors = 0;
>>          mddev->ctime = mddev->utime = 0;
>>          mddev->layout = 0;
>> +       mddev->logical_block_size = 0;
>>          mddev->max_disks = 0;
>>          mddev->events = 0;
>>          mddev->can_decrease_events = 0;
>> diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
>> index f1d8811a542a..705889a09fc1 100644
>> --- a/drivers/md/raid0.c
>> +++ b/drivers/md/raid0.c
>> @@ -382,6 +382,7 @@ static int raid0_set_limits(struct mddev *mddev)
>>          md_init_stacking_limits(&lim);
>>          lim.max_hw_sectors = mddev->chunk_sectors;
>>          lim.max_write_zeroes_sectors = mddev->chunk_sectors;
>> +       lim.logical_block_size = mddev->logical_block_size;
> 
> raid0 creates the stripe zones first, based on the member disks' LBS. So
> it's not right to change the logical block size here?
> 
> Best Regards
> Xiao

On further inspection, it is feasible to move raid0_set_limits() before
create_strip_zones(). I will fix it in the next version. Thank you for your
detailed review.
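
To make the change concrete, a rough sketch of the reordering in raid0_run()
(surrounding code is elided and the exact shape may differ in the next
version):

	/*
	 * Apply the queue limits, including the configured logical_block_size,
	 * before building the stripe zones, so that create_strip_zones() sees
	 * the final LBS of the array.
	 */
	ret = raid0_set_limits(mddev);
	if (ret)
		return ret;

	ret = create_strip_zones(mddev, &conf);
	if (ret < 0)
		return ret;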

>>          lim.io_min = mddev->chunk_sectors << 9;
>>          lim.io_opt = lim.io_min * mddev->raid_disks;
>>          lim.chunk_sectors = mddev->chunk_sectors;
>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>> index d0f6afd2f988..de0c843067dc 100644
>> --- a/drivers/md/raid1.c
>> +++ b/drivers/md/raid1.c
>> @@ -3223,6 +3223,7 @@ static int raid1_set_limits(struct mddev *mddev)
>>
>>          md_init_stacking_limits(&lim);
>>          lim.max_write_zeroes_sectors = 0;
>> +       lim.logical_block_size = mddev->logical_block_size;
>>          lim.features |= BLK_FEAT_ATOMIC_WRITES;
>>          err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
>>          if (err)
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index c3cfbb0347e7..68c8148386b0 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -4005,6 +4005,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
>>
>>          md_init_stacking_limits(&lim);
>>          lim.max_write_zeroes_sectors = 0;
>> +       lim.logical_block_size = mddev->logical_block_size;
>>          lim.io_min = mddev->chunk_sectors << 9;
>>          lim.chunk_sectors = mddev->chunk_sectors;
>>          lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index c32ffd9cffce..ff0daa22df65 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -7747,6 +7747,7 @@ static int raid5_set_limits(struct mddev *mddev)
>>          stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
>>
>>          md_init_stacking_limits(&lim);
>> +       lim.logical_block_size = mddev->logical_block_size;
>>          lim.io_min = mddev->chunk_sectors << 9;
>>          lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
>>          lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
>> --
>> 2.39.2
>>
> 
> 
> .

-- 
Thanks,
Nan

