Message-ID: <20230906093720.1070929-1-linan122@huawei.com>
Date:   Wed, 6 Sep 2023 17:37:20 +0800
From:   Li Nan <linan122@...wei.com>
To:     <song@...nel.org>
CC:     <linux-raid@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <linan122@...wei.com>, <yukuai3@...wei.com>, <yi.zhang@...wei.com>,
        <houtao1@...wei.com>, <yangerkun@...wei.com>
Subject: [PATCH] md/raid1: only update stack limits with the device in use

It is unreasonable for a spare device to affect the array's stack limits.
For example, create a raid1 from two devices with a 512-byte
logical_block_size; the logical_block_size of the array will be 512. But
after adding a 4k device as a spare, the logical_block_size of the array
changes as follows.

  mdadm -C /dev/md0 -n 2 -l 1 /dev/sd[ab]	//sd[ab] is 512
  //logical_block_size of md0: 512

  mdadm --add /dev/md0 /dev/sdc			//sdc is 4k
  //logical_block_size of md0: 512

  mdadm -S /dev/md0
  mdadm -A /dev/md0 /dev/sd[abc]
  //logical_block_size of md0: 4k
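
The logical_block_size of md0 at each step above can be read from sysfs:

  cat /sys/block/md0/queue/logical_block_size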

This confuses users: nothing has changed, so why did the logical_block_size
of the array change? It changes because raid1_run() stacks the queue limits
of every rdev, including spares that are not in use.

Fix this by updating the stack limits of the array only with the devices
in use.

Signed-off-by: Li Nan <linan122@...wei.com>
---
 drivers/md/raid1.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 95504612b7e2..d75c5dd89e86 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3140,19 +3140,16 @@ static int raid1_run(struct mddev *mddev)
 	if (mddev->queue)
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
 
-	rdev_for_each(rdev, mddev) {
-		if (!mddev->gendisk)
-			continue;
-		disk_stack_limits(mddev->gendisk, rdev->bdev,
-				  rdev->data_offset << 9);
-	}
-
 	mddev->degraded = 0;
-	for (i = 0; i < conf->raid_disks; i++)
-		if (conf->mirrors[i].rdev == NULL ||
-		    !test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
-		    test_bit(Faulty, &conf->mirrors[i].rdev->flags))
+	for (i = 0; i < conf->raid_disks; i++) {
+		rdev = conf->mirrors[i].rdev;
+		if (rdev && mddev->gendisk)
+			disk_stack_limits(mddev->gendisk, rdev->bdev,
+					  rdev->data_offset << 9);
+		if (!rdev || !test_bit(In_sync, &rdev->flags) ||
+		    test_bit(Faulty, &rdev->flags))
 			mddev->degraded++;
+	}
 	/*
 	 * RAID1 needs at least one disk in active
 	 */
-- 
2.39.2
