Message-ID: <5e08c3aa-7bd5-f5dd-3d38-b93fb772ea56@huaweicloud.com>
Date:   Sun, 10 Sep 2023 10:24:40 +0800
From:   Yu Kuai <yukuai1@...weicloud.com>
To:     Song Liu <song@...nel.org>, Yu Kuai <yukuai1@...weicloud.com>
Cc:     Li Nan <linan122@...wei.com>, linux-raid@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
        houtao1@...wei.com, yangerkun@...wei.com,
        "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH] md/raid1: only update stack limits with the device in use

Hi,

On 2023/09/09 4:42, Song Liu wrote:
> On Wed, Sep 6, 2023 at 11:30 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
>>
>> Hi,
>>
>> On 2023/09/06 17:37, Li Nan wrote:
>>> It is unreasonable for a spare device to affect the array's stack limits.
>>> For example, create a raid1 with two 512-byte devices and the
>>> logical_block_size of the array will be 512. But after adding a 4k device
>>> as a spare, the logical_block_size of the array changes as follows.
>>>
>>>     mdadm -C /dev/md0 -n 2 -l 10 /dev/sd[ab]   //sd[ab] is 512
>>>     //logical_block_size of md0: 512
>>>
>>>     mdadm --add /dev/md0 /dev/sdc                      //sdc is 4k
>>>     //logical_block_size of md0: 512
>>>
>>>     mdadm -S /dev/md0
>>>     mdadm -A /dev/md0 /dev/sd[ab]
>>>     //logical_block_size of md0: 4k
>>>
>>> This will confuse users: nothing has been changed, so why did the
>>> logical_block_size of the array change?
>>>
>>> Now, only update the logical_block_size of the array with the devices in
>>> use.
>>>
>>> Signed-off-by: Li Nan <linan122@...wei.com>
>>> ---
>>>    drivers/md/raid1.c | 19 ++++++++-----------
>>>    1 file changed, 8 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>>> index 95504612b7e2..d75c5dd89e86 100644
>>> --- a/drivers/md/raid1.c
>>> +++ b/drivers/md/raid1.c
>>> @@ -3140,19 +3140,16 @@ static int raid1_run(struct mddev *mddev)
>>
>> I'm not sure about this behaviour. 'logical_block_size' can still be
>> increased while adding a new underlying disk, so the key point is not
>> when to increase 'logical_block_size'. If there is a mounted fs or a
>> partition on the array, I think the array will be corrupted.
> 
> How common is such fs/partition corruption? I think some filesystems and
> partition tables can work properly with a 512=>4096 change?

For a fs, that depends on the fs block size, which is usually set at mkfs
time: if the block size is less than 4096, such a fs can't be mounted
anymore.
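
For reference, the check that refuses such a mount lives in the generic
block layer; roughly (a simplified sketch of set_blocksize(), not the
verbatim kernel code):

	/* fs block size must be a power of two between 512 and PAGE_SIZE,
	 * and can't be smaller than the device's logical block size.
	 */
	if (size > PAGE_SIZE || size < 512 || !is_power_of_2(size))
		return -EINVAL;
	if (size < bdev_logical_block_size(bdev))
		return -EINVAL;

So a fs created with a 1024 or 2048 byte block size on the old 512-byte
array can no longer be mounted once logical_block_size becomes 4096.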

For a partition, it is much worse: the start sector and end sector stay
the same while the sector size changes, so the same LBA now maps to a
different byte offset on disk. A 4096 -> 512 change has the same problem.
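
As a made-up illustration, the same partition table entry ends up pointing
at a different byte offset once the logical block size is reinterpreted:

	u64 start_lba = 2048;			/* from the partition table    */
	u64 off_512   = start_lba * 512;	/* 1 MiB with 512-byte blocks  */
	u64 off_4096  = start_lba * 4096;	/* 8 MiB with 4096-byte blocks */

Everything that used to start at 1 MiB is now looked up at 8 MiB, so the
partition contents are effectively garbage.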

Thanks,
Kuai

> 
> Thanks,
> Song
> 
>>
>> Perhaps once the array is started, logical_block_size should not be
>> changed anymore. This will require 'logical_block_size' to be metadata
>> inside the raid superblock, and the array should deny any new disk with a
>> bigger logical_block_size.
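
Concretely, such a check on a newly added disk might look roughly like the
sketch below; the function name and the 'sb_logical_block_size' field are
made up here, not existing md code:

	static int md_check_new_rdev_lbs(struct mddev *mddev,
					 struct md_rdev *rdev)
	{
		/* deny disks whose logical block size is bigger than the
		 * value recorded when the array was created.
		 */
		if (bdev_logical_block_size(rdev->bdev) >
		    mddev->sb_logical_block_size)
			return -EINVAL;

		return 0;
	}
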
>>
>> Thanks,
>> Kuai
>>
>>
>>>        if (mddev->queue)
>>>                blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
>>>
>>> -     rdev_for_each(rdev, mddev) {
>>> -             if (!mddev->gendisk)
>>> -                     continue;
>>> -             disk_stack_limits(mddev->gendisk, rdev->bdev,
>>> -                               rdev->data_offset << 9);
>>> -     }
>>> -
>>>        mddev->degraded = 0;
>>> -     for (i = 0; i < conf->raid_disks; i++)
>>> -             if (conf->mirrors[i].rdev == NULL ||
>>> -                 !test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
>>> -                 test_bit(Faulty, &conf->mirrors[i].rdev->flags))
>>> +     for (i = 0; i < conf->raid_disks; i++) {
>>> +             rdev = conf->mirrors[i].rdev;
>>> +             if (rdev && mddev->gendisk)
>>> +                     disk_stack_limits(mddev->gendisk, rdev->bdev,
>>> +                                       rdev->data_offset << 9);
>>> +             if (!rdev || !test_bit(In_sync, &rdev->flags) ||
>>> +                 test_bit(Faulty, &rdev->flags))
>>>                        mddev->degraded++;
>>> +     }
>>>        /*
>>>         * RAID1 needs at least one disk in active
>>>         */
>>>
>>
> .
> 
