Message-ID: <6849835d-a3ac-e840-09e9-8539e7953fe4@huaweicloud.com>
Date: Thu, 8 Feb 2024 16:44:31 +0800
From: Li Nan <linan666@...weicloud.com>
To: Song Liu <song@...nel.org>, linan666@...weicloud.com
Cc: axboe@...nel.dk, linux-raid@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, yukuai3@...wei.com, yi.zhang@...wei.com,
houtao1@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH] block: fix deadlock between bd_link_disk_holder and
partition scan
On 2024/2/8 14:50, Song Liu wrote:
> On Wed, Feb 7, 2024 at 1:32 AM <linan666@...weicloud.com> wrote:
>>
>> From: Li Nan <linan122@...wei.com>
>>
>> The gendisk 'open_mutex' is used to protect opening and closing block
>> devices. But in bd_link_disk_holder(), it is also used to protect the
>> creation of the symlink between the holding disk and the slave bdev,
>> which introduces some issues.
>>
>> When bd_link_disk_holder() is called, the driver is usually in the middle
>> of initialization or modification and may have suspended io submission.
>> At this time, any io that holds 'open_mutex', such as a partition scan,
>> can deadlock. For example, in raid:
>>
>> T1                                T2
>> bdev_open_by_dev
>>  lock open_mutex [1]
>>  ...
>>   efi_partition
>>   ...
>>    md_submit_bio
>>                                   md_ioctl
>>                                    mddev_suspend
>>                                     -> suspend all io
>>                                    md_add_new_disk
>>                                     bind_rdev_to_array
>>                                      bd_link_disk_holder
>>                                       try lock open_mutex [2]
>>    md_handle_request
>>     -> wait mddev_resume
>>
>> T1 scans partitions while T2 adds a new device to the raid. T1 waits for
>> T2 to resume mddev, but T2 waits for 'open_mutex' held by T1. Deadlock
>> occurs.
>>
>> Fix it by introducing a local mutex 'holder_mutex' to replace 'open_mutex'.
>
> Is this to fix [1]? Do we need some Fixes and/or Closes tags?
>
No. This is just another way to fix the same issue as [2]; either [2] or
this patch fixes it. I am not sure about the root cause of [1] yet.

[2] https://patchwork.kernel.org/project/linux-raid/list/?series=812045
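
For context, the idea of this patch is roughly the following sketch (not
the actual diff; a file-local 'holder_mutex' in block/holder.c is my
shorthand here, and the function body is elided):

static DEFINE_MUTEX(holder_mutex);

int bd_link_disk_holder(struct block_device *bdev, struct gendisk *disk)
{
	int ret = 0;

	/* Serialize holder-symlink creation with a dedicated mutex
	 * instead of disk->open_mutex, so this path no longer waits
	 * for a partition scan that holds open_mutex. */
	mutex_lock(&holder_mutex);
	/* ... look up or allocate the holder and create the sysfs
	 * symlinks, as before ... */
	mutex_unlock(&holder_mutex);
	return ret;
}
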
> Could you please add steps to reproduce this issue?
We need to modify the kernel by adding sleeps in md_handle_request() and
md_ioctl() as below, and then:
1. mdadm -CR /dev/md0 -l1 -n2 /dev/sd[bc]                # create a raid
2. echo 1 > /sys/module/md_mod/parameters/error_inject   # enable the sleeps
3. mdadm --add /dev/md0 /dev/sda                         # add a disk to the raid
4. submit a BLKRRPART ioctl to the raid within 10s (a small C sketch
   follows).
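
A minimal sketch for step 4, assuming the array is /dev/md0 (run it within
10s of step 3):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

int main(void)
{
	int fd = open("/dev/md0", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/md0");
		return 1;
	}

	/* BLKRRPART triggers a partition rescan; the scan takes
	 * disk->open_mutex, which is the T1 side of the deadlock. */
	if (ioctl(fd, BLKRRPART) < 0)
		perror("ioctl BLKRRPART");

	close(fd);
	return 0;
}
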
Kernel changes:
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 350f5b22ba6f..ce16d319edf2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -76,6 +76,8 @@ static DEFINE_SPINLOCK(pers_lock);
 
 static const struct kobj_type md_ktype;
 
+static bool error_inject = false;
+
 struct md_cluster_operations *md_cluster_ops;
 EXPORT_SYMBOL(md_cluster_ops);
 static struct module *md_cluster_mod;
@@ -372,6 +374,8 @@ static bool is_suspended(struct mddev *mddev, struct bio *bio)
 
 void md_handle_request(struct mddev *mddev, struct bio *bio)
 {
+	if (error_inject)
+		ssleep(10);
 check_suspended:
 	if (is_suspended(mddev, bio)) {
 		DEFINE_WAIT(__wait);
@@ -7752,6 +7756,8 @@ static int md_ioctl(struct block_device *bdev, blk_mode_t mode,
 	 */
 	if (mddev->pers) {
 		mdu_disk_info_t info;
+		if (error_inject)
+			ssleep(10);
 		if (copy_from_user(&info, argp, sizeof(info)))
 			err = -EFAULT;
 		else if (!(info.state & (1<<MD_DISK_SYNC)))
@@ -10120,6 +10126,7 @@ module_param_call(start_ro, set_ro, get_ro, NULL, S_IRUSR|S_IWUSR);
 module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR);
 module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR);
 module_param(create_on_open, bool, S_IRUSR|S_IWUSR);
+module_param(error_inject, bool, S_IRUSR|S_IWUSR);
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("MD RAID framework");
--
Thanks,
Nan