Message-ID: <414ae6e0-604a-f4d3-d7ce-260bd8564927@huaweicloud.com>
Date: Tue, 23 Sep 2025 08:43:10 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Heinz Mauelshagen <heinzm@...hat.com>, Yu Kuai <yukuai1@...weicloud.com>
Cc: song@...nel.org, linux-raid@...r.kernel.org,
linux-kernel@...r.kernel.org, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH] md raid: fix hang when stopping arrays with metadata
 through dm-raid
Hi,
On 2025/09/22 21:32, Heinz Mauelshagen wrote:
> Hi Kuai,
>
> you're right, the bitmap should be flushed in dm-raid's raid_postsuspend()
> function by calling md_stop_writes(), when upstack I/O is already quiesced.
> So we can't use mddev_is_dm() in __md_stop_writes(), as it prevents flushing
> the bitmap with the current patch.
>
> md_is_rdwr() looks like the appropriate condition, i.e. flush when it is
> true, don't when it is false.
>
> If md_is_rdwr() is ok for that logic, I'll create another patch leaving it
> true in postsuspend and false in the destructor call to md_stop() from
> dm-raid.
>
> Thoughts?
>
Yeah, this sounds correct.
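
Just to confirm we mean the same thing, an (untested) sketch of that
condition in __md_stop_writes(), assuming md_is_rdwr() is still true when
raid_postsuspend() -> md_stop_writes() runs and is already false by the
time the dm-raid destructor calls md_stop():

	if (md_is_rdwr(mddev)) {
		/* Array is still read-write: quiesce and flush the
		 * bitmap before writes are stopped, so no dirty bits
		 * are left behind for the next reload.
		 */
		if (mddev->pers && mddev->pers->quiesce) {
			mddev->pers->quiesce(mddev, 1);
			mddev->pers->quiesce(mddev, 0);
		}

		mddev->bitmap_ops->flush(mddev);
	}

In the suspended-destructor case the whole block would then be skipped,
which avoids the hang without special-casing mddev_is_dm().
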
Thanks,
Kuai
> - lvmguy
>
>
> On Mon, Sep 22, 2025 at 3:09 AM Yu Kuai <yukuai1@...weicloud.com> wrote:
>
>> Hi,
>>
>> On 2025/09/18 21:42, Heinz Mauelshagen wrote:
>>> When using device-mapper's dm-raid target, stopping a RAID array can
>>> cause the system to hang under specific conditions.
>>>
>>> This occurs when:
>>>
>>> - A dm-raid managed device tree is suspended from top to bottom
>>> (the top-level RAID device is suspended first, followed by its
>>> underlying metadata and data devices)
>>>
>>> - The top-level RAID device is then removed
>>>
>>> The hang happens because removing the top-level device triggers md_stop()
>>> from the dm-raid destructor. This function attempts to flush the
>>> write-intent bitmap, which requires writing bitmap superblocks to the
>>> metadata sub-devices. However, since these metadata devices are already
>>> suspended, the write operations cannot complete, causing the system to
>>> hang.
>>>
>>> Fix:
>>>
>>> - Prevent bitmap flushing when md_stop() is called from dm-raid contexts
>>>   and avoid a quiescing/unquiescing cycle which could also cause I/O
>>
>> If the bitmap flush is skipped, the bitmap can still be dirty after dm-raid
>> is stopped, and the next time dm-raid is reloaded it looks like there will
>> be an unnecessary data resync because of the dirty bits?
>>
>> Thanks,
>> Kuai
>>
>>>
>>> - Avoid any I/O operations that might occur during the quiesce/unquiesce
>>>   process in md_stop()
>>>
>>> This ensures that RAID array teardown can complete successfully even when
>>> the underlying devices are in a suspended state.
>>>
>>> Signed-off-by: Heinz Mauelshagen <heinzm@...hat.com>
>>> ---
>>> drivers/md/md.c | 12 +++++++-----
>>> 1 file changed, 7 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>>> index 4e033c26fdd4..53e15bdd9ab2 100644
>>> --- a/drivers/md/md.c
>>> +++ b/drivers/md/md.c
>>> @@ -6541,12 +6541,14 @@ static void __md_stop_writes(struct mddev *mddev)
>>>  {
>>>  	timer_delete_sync(&mddev->safemode_timer);
>>>  
>>> -	if (mddev->pers && mddev->pers->quiesce) {
>>> -		mddev->pers->quiesce(mddev, 1);
>>> -		mddev->pers->quiesce(mddev, 0);
>>> -	}
>>> +	if (!mddev_is_dm(mddev)) {
>>> +		if (mddev->pers && mddev->pers->quiesce) {
>>> +			mddev->pers->quiesce(mddev, 1);
>>> +			mddev->pers->quiesce(mddev, 0);
>>> +		}
>>>  
>>> -	mddev->bitmap_ops->flush(mddev);
>>> +		mddev->bitmap_ops->flush(mddev);
>>> +	}
>>>  
>>>  	if (md_is_rdwr(mddev) &&
>>>  	    ((!mddev->in_sync && !mddev_is_clustered(mddev)) ||
>>>
>>
>>
>