Message-ID: <43b0b2f4-17c0-61d2-9c41-0595fb6f2efc@huaweicloud.com>
Date: Thu, 7 Sep 2023 10:04:11 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Mariusz Tkaczyk <mariusz.tkaczyk@...ux.intel.com>,
AceLan Kao <acelan@...il.com>
Cc: Yu Kuai <yukuai1@...weicloud.com>, Song Liu <song@...nel.org>,
Guoqing Jiang <guoqing.jiang@...ux.dev>,
Bagas Sanjaya <bagasdotme@...il.com>,
Christoph Hellwig <hch@....de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux Regressions <regressions@...ts.linux.dev>,
Linux RAID <linux-raid@...r.kernel.org>,
"yukuai (C)" <yukuai3@...wei.com>,
"yangerkun@...wei.com" <yangerkun@...wei.com>
Subject: Re: Infinite systemd loop when powering off the machine with multiple
 MD RAIDs
Hi,
On 2023/09/06 18:27, Mariusz Tkaczyk wrote:
> On Wed, 6 Sep 2023 14:26:30 +0800
> AceLan Kao <acelan@...il.com> wrote:
>
>> Based on previous testing, I don't think it's an issue in systemd, so I
>> did a simple test and found that the issue is gone.
>> You only need to add a small delay in md_release(), and then the issue
>> can't be reproduced.
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 78be7811a89f..ef47e34c1af5 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -7805,6 +7805,7 @@ static void md_release(struct gendisk *disk)
>>  {
>>  	struct mddev *mddev = disk->private_data;
>>
>> +	msleep(10);
>>  	BUG_ON(!mddev);
>>  	atomic_dec(&mddev->openers);
>>  	mddev_put(mddev);
>
> I have a repro and I tested it on my setup; it is not working for me.
> My setup may be more "advanced", to maximize the chance of reproduction:
>
> # cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4] [raid10] [raid0]
> md121 : active raid0 nvme2n1[1] nvme5n1[0]
> 7126394880 blocks super external:/md127/0 128k chunks
>
> md122 : active raid10 nvme6n1[3] nvme4n1[2] nvme1n1[1] nvme7n1[0]
> 104857600 blocks super external:/md126/0 64K chunks 2 near-copies [4/4] [UUUU]
>
> md123 : active raid5 nvme6n1[3] nvme4n1[2] nvme1n1[1] nvme7n1[0]
> 2655765504 blocks super external:/md126/1 level 5, 32k chunk, algorithm 0 [4/4] [UUUU]
>
> md124 : active raid1 nvme0n1[1] nvme3n1[0]
> 99614720 blocks super external:/md125/0 [2/2] [UU]
>
> md125 : inactive nvme3n1[1](S) nvme0n1[0](S)
> 10402 blocks super external:imsm
>
> md126 : inactive nvme7n1[3](S) nvme1n1[2](S) nvme6n1[1](S) nvme4n1[0](S)
> 20043 blocks super external:imsm
>
> md127 : inactive nvme2n1[1](S) nvme5n1[0](S)
> 10402 blocks super external:imsm
>
> I have an almost 99% repro ratio; slowly moving forward...
>
> It is an endless loop because systemd-shutdown sends the "STOP_ARRAY" ioctl,
> which succeeds, but the array is not stopped. For that reason it sets
> "changed = true".
How does systemd-shutdown judge whether the array is stopped? By reading
/proc/mdstat, by listing /dev/md*, or some other way?
>
> Systemd-shutdown sees the change and retries, to check whether there is
> something else that can be stopped now, and again, and again...
>
> I will check what is returned first; it could be 0 or it could be a positive
> errno (nit?) because systemd only cares about "if (r < 0)".
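If I understand that description correctly, the retry pass is roughly the
shape of the sketch below. This is only my paraphrase of the pattern you
describe, not systemd's actual source; the device paths, the O_EXCL open and
the iteration are made-up stand-ins, only the STOP_ARRAY ioctl itself (from
linux/raid/md_u.h) is real:

#include <fcntl.h>
#include <linux/ioctl.h>	/* _IO() */
#include <linux/major.h>	/* MD_MAJOR */
#include <linux/raid/md_u.h>	/* STOP_ARRAY */
#include <stdbool.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	/* Stand-in paths for whatever enumeration systemd really uses. */
	const char *arrays[] = { "/dev/md122", "/dev/md123", "/dev/md124" };
	bool changed;

	do {
		changed = false;
		for (size_t i = 0; i < sizeof(arrays) / sizeof(arrays[0]); i++) {
			int fd = open(arrays[i], O_RDONLY | O_EXCL);
			if (fd < 0)
				continue;	/* device gone: no progress */
			int r = ioctl(fd, STOP_ARRAY, NULL);
			close(fd);
			if (r >= 0)
				changed = true;	/* r == 0 counts as progress */
		}
	} while (changed);	/* never settles if STOP_ARRAY keeps
				 * "succeeding" without the array going away */
	return 0;
}

If the loop has that shape, then as long as STOP_ARRAY keeps returning 0 for
md123 the "changed" flag can never settle, which matches the endless loop
you see.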
I did notice that there are lots of log messages about md123 being stopped
(the -16 in these lines is -EBUSY):
[ 1371.834034] md122:systemd-shutdow bd_prepare_to_claim return -16
[ 1371.840294] md122:systemd-shutdow blkdev_get_by_dev return -16
[ 1371.846845] md: md123 stopped.
[ 1371.850155] md122:systemd-shutdow bd_prepare_to_claim return -16
[ 1371.856411] md122:systemd-shutdow blkdev_get_by_dev return -16
[ 1371.862941] md: md123 stopped.
And md_ioctl->do_md_stop doesn't have an error path after printing this
log, hence 0 will be returned to the user.
The normal case is:

open md123
ioctl STOP_ARRAY -> all rdevs should be removed from the array
close md123 -> mddev will finally be freed by:

md_release
  mddev_put
    set_bit(MD_DELETED, &mddev->flags) -> user should not see this mddev
    queue_work(md_misc_wq, &mddev->del_work)
      mddev_delayed_delete
        kobject_put(&mddev->kobj)
          md_kobj_release
            del_gendisk
              md_free_disk
                mddev_free
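
(As an aside: /sys/block/md123 only disappears once del_gendisk() near the
end of that chain has run, so polling sysfs is one way to observe from
userspace whether the array is really gone. The snippet below is just an
illustration, my suggestion rather than anything systemd is known to do:)

#include <stdio.h>
#include <unistd.h>

/* Illustrative only: returns 1 once /sys/block/<name> has disappeared,
 * i.e. once del_gendisk() from the teardown chain above has completed. */
static int md_really_gone(const char *name)
{
	char path[64];
	snprintf(path, sizeof(path), "/sys/block/%s", name);
	return access(path, F_OK) != 0;
}

int main(void)
{
	printf("md123 gone: %d\n", md_really_gone("md123"));
	return 0;
}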
Now that you can reproduce this problem 99% of the time, can you dig deeper
and find out what is wrong?
Thanks,
Kuai
>
> Thanks,
> Mariusz