Date: Thu, 25 Jan 2024 09:40:47 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Xiao Ni <xni@...hat.com>
Cc: agk@...hat.com, snitzer@...nel.org, mpatocka@...hat.com,
 dm-devel@...ts.linux.dev, song@...nel.org, jbrassow@....redhat.com,
 neilb@...e.de, heinzm@...hat.com, shli@...com, akpm@...l.org,
 linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
 yukuai1@...weicloud.com, yi.zhang@...wei.com, yangerkun@...wei.com,
 "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v2 00/11] dm-raid: fix v6.7 regressions

Hi,

On 2024/01/25 8:50, Xiao Ni wrote:
> On Wed, Jan 24, 2024 at 8:19 PM Xiao Ni <xni@...hat.com> wrote:
>>
>> On Wed, Jan 24, 2024 at 5:18 PM Yu Kuai <yukuai3@...wei.com> wrote:
>>>
>>> First regression related to stop sync thread:
>>>
>>> The lifetime of sync_thread is designed as follows (a simplified
>>> sketch is given after this list):
>>>
>>> 1) Decide to start sync_thread: set MD_RECOVERY_NEEDED and wake up
>>> the daemon thread;
>>> 2) The daemon thread detects that MD_RECOVERY_NEEDED is set, then
>>> sets MD_RECOVERY_RUNNING and registers sync_thread;
>>> 3) md_do_sync() executes the actual work; when it finishes or is
>>> interrupted, it sets MD_RECOVERY_DONE and wakes up the daemon thread;
>>> 4) The daemon thread detects that MD_RECOVERY_DONE is set, then
>>> clears MD_RECOVERY_RUNNING and unregisters sync_thread;
>>>
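
As a rough illustration only (simplified, not the exact md code), the
handshake above is driven by bits in mddev->recovery, roughly:

	/* 1) requester: ask for a sync and kick the daemon thread */
	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
	md_wakeup_thread(mddev->thread);

	/* 2) daemon thread (md_check_recovery): start the sync thread */
	if (test_bit(MD_RECOVERY_NEEDED, &mddev->recovery)) {
		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
		/* register sync_thread, which runs md_do_sync() */
	}

	/* 3) sync thread (md_do_sync): do the work, then hand back */
	set_bit(MD_RECOVERY_DONE, &mddev->recovery);
	md_wakeup_thread(mddev->thread);

	/* 4) daemon thread: reap the sync thread */
	if (test_bit(MD_RECOVERY_DONE, &mddev->recovery)) {
		/* md_reap_sync_thread(): unregister sync_thread */
		clear_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
	}
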
>>> In v6.7, commit f52f5c71f3d4 ("md: fix stopping sync thread") fixed
>>> md/raid to follow this design; however, dm-raid was not considered at
>>> that time, and the following tests will hang:
>>
>> Hi Kuai
>>
>> Thanks very much for the patch set. I reported the dm-raid deadlock
>> when stopping dm-raid, and we had the patch set "[PATCH v5 md-fixes
>> 0/3] md: fix stopping sync thread", which contains f52f5c71f3d4. So we
>> did consider dm-raid at that time, because we wanted to resolve the
>> deadlock problem. I re-read patch f52f5c71f3d4. It has two major
>> changes. One is to use a common function, stop_sync_thread(), for
>> stopping the sync thread; this fixes the deadlock problem. The second
>> changes the way the sync thread is reaped: mdraid and dmraid both reap
>> the sync thread in __md_stop_writes. So the patch looks heavier than
>> necessary.
>>
>> Before f52f5c71f3d4, do_md_stop released reconfig_mutex before waiting
>> for sync_thread to finish, so the deadlock problem fixed in
>> 130443d60b1b ("md: refactor idle/frozen_sync_thread() to fix deadlock")
>> should not occur there. So we would only need to change
>> __md_stop_writes to stop the sync thread the way do_md_stop does, and
>> reap the sync thread directly.
>>
>> Maybe this can avoid the deadlock? I'll try this approach and give the test result.
> 
> Please ignore my last comment; something was wrong with it. Previously,
> only dmraid called md_reap_sync_thread directly in __md_stop_writes.
> 
> 130443d60b1b ("md: refactor idle/frozen_sync_thread() to fix
> deadlock") fixes a deadlock problem: sync IO is running when user IO
> comes in; the sync IO needs to wait for the user IO; the user IO needs
> to update the superblock, which requires mddev->reconfig_mutex; but the
> user action that stops the sync thread happens while holding this lock.
> So this is the deadlock. dmraid doesn't update the superblock the way
> md does, and I'm not sure whether dmraid has such a deadlock problem.
> If not, dmraid can call md_reap_sync_thread directly, right?

Yes, the deadlock happens because calling md_reap_sync_thread()
directly while holding the lock blocks the daemon thread from handling
IO.
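
For reference, the asynchronous pattern that avoids this (paraphrased,
not the exact stop_sync_thread() code in md.c) is to signal the sync
thread and drop the lock before waiting:

	set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	md_wakeup_thread(mddev->thread);	/* let the daemon reap it */

	mddev_unlock(mddev);			/* drop reconfig_mutex first */

	/* wait without the lock, so the daemon thread isn't blocked */
	wait_event(resync_wait,
		   !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));

	mddev_lock_nointr(mddev);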

However, regarding the dm-raid superblock, I'm confused: the code looks
like the md superblock is still updated, for example:

rs_update_sbs
  set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
  md_update_sb(mddev, 1);

And the code in raid1/10/5 that updates the md superblock doesn't have
any special handling for dm-raid. Or am I missing something here?

Thanks,
Kuai

> 
>>>
>>> shell/integrity-caching.sh
>>> shell/lvconvert-raid-reshape.sh
>>>
>>> This patch set fixes the broken tests with patches 1-4:
>>>   - patch 1 fixes step 4) being broken by a suspended array;
>>>   - patch 2 fixes step 4) being broken by a read-only array;
>>>   - patch 3 fixes step 3) being broken because md_do_sync() doesn't set
>>>   MD_RECOVERY_DONE; note that this patch introduces a new problem where
>>>   data will be corrupted, which is fixed in later patches.
>>>   - patch 4 fixes step 1) being broken because sync_thread is registered
>>>   and MD_RECOVERY_RUNNING is set directly;
>>>
>>> With patches 1-4, the above tests won't hang anymore; however, they
>>> will still fail and complain that ext4 is corrupted;
>>
>> For patch 3, as I mentioned today, the root cause is that dm-raid's
>> rs_start_reshape sets MD_RECOVERY_WAIT, and md_do_sync returns when
>> MD_RECOVERY_WAIT is set. That is why dm-raid can't stop the sync
>> thread when starting a new reshape. The approach in patch 3 looks like
>> a workaround. We need to figure out whether dm-raid really needs to
>> set MD_RECOVERY_WAIT. Since we now stop the sync thread in an
>> asynchronous way, the deadlock problem that was fixed in 644e2537f
>> ("dm raid: fix stripe adding reshape deadlock") may disappear. Maybe
>> we can revert that patch.
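
Side note: the effect of MD_RECOVERY_WAIT described above is that
md_do_sync() bails out early, roughly like the following paraphrase
(not the exact upstream code), so MD_RECOVERY_DONE is never set and
step 4) never runs:

	/* near the top of md_do_sync(), paraphrased */
	if (test_bit(MD_RECOVERY_WAIT, &mddev->recovery))
		return;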

In fact, the MD_RECOVERY_WAIT flag looks to me like a workaround to
prevent a new sync thread from starting. In patch 6 I actually freeze
the sync_thread during suspend and prevent the user from unfreezing it
from raid_message(). I think this way is better, and MD_RECOVERY_WAIT
can probably be removed.
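
To make the intent concrete, the idea is roughly the following
(simplified sketch, not the actual patch; rs is the dm-raid raid_set,
and rs_is_suspended() is a hypothetical helper standing in for however
the suspended state is tracked):

	/* suspend path: freeze the sync thread and keep it frozen */
	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
	set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	md_wakeup_thread(mddev->thread);

	/* raid_message(): while suspended, refuse messages that would
	 * clear MD_RECOVERY_FROZEN and restart the sync thread
	 */
	if (rs_is_suspended(rs))	/* hypothetical helper */
		return -EBUSY;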

> 
> After talking with Heinz, he mentioned that dmraid needs this bit to
> prevent the md sync thread from starting during reshape. So patch 3
> looks good.
> 
> Best Regards
> Xiao
>>
>> Best Regards
>> Xiao
>>
>>>
>>> Second regression related to frozen sync thread:
>>>
>>> Note that for raid456, if reshape is interrupted, then calling
>>> "pers->start_reshape" will corrupt data. This is because dm-raid relies
>>> on md_do_sync() not setting MD_RECOVERY_DONE, so that a new sync_thread
>>> won't be registered, and patch 3 breaks exactly that.
>>>
>>>   - Patches 5-6 fix this problem by interrupting reshape and freezing
>>>   sync_thread in dm_suspend(), then unfreezing and continuing reshape in
>>> dm_resume(). It's verified that the dm-raid tests no longer complain
>>> that ext4 is corrupted.
>>>   - Patch 7 fixes the problem that raid_message() calls
>>>   md_reap_sync_thread() directly, without holding 'reconfig_mutex'.
>>>
>>> Last regression related to dm-raid456 IO concurrent with reshape:
>>>
>>> For raid456, if reshape is still in progress, IO across the reshape
>>> position will wait for the reshape to make progress. However, for
>>> dm-raid, in the following cases the reshape will never make progress,
>>> hence the IO will hang:
>>>
>>> 1) the array is read-only;
>>> 2) MD_RECOVERY_WAIT is set;
>>> 3) MD_RECOVERY_FROZEN is set;
>>>
>>> After commit c467e97f079f ("md/raid6: use valid sector values to
>>> determine if an I/O should wait on the reshape") fixed the problem that
>>> IO across the reshape position didn't wait for the reshape, the dm-raid
>>> test shell/lvconvert-raid-reshape.sh started to hang in
>>> raid5_make_request().
>>>
>>> For md/raid, the problem doesn't exist because:
>>>
>>> 1) If the array is read-only, it can be switched to read-write via
>>>     ioctl/sysfs;
>>> 2) md/raid never sets MD_RECOVERY_WAIT;
>>> 3) If MD_RECOVERY_FROZEN is set, since mddev_suspend() no longer holds
>>>     'reconfig_mutex', the flag can be cleared and reshape can continue
>>>     via the sysfs API 'sync_action'.
>>>
>>> However, I'm not sure yet how to avoid the problem in dm-raid.
>>>
>>>   - patches 9-11 fix this problem by detecting the above 3 cases in
>>>   dm_suspend() and failing those IOs directly (a rough sketch is given
>>>   below).
>>>
>>> If users really hit the IO error, it means they were reading wrong
>>> data before c467e97f079f. And it's safe to read/write the array after
>>> the reshape makes progress successfully.
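
A simplified sketch of that check (illustrative only, not the actual
patches 9-11; reshape_cannot_progress() is a hypothetical helper, and
the exact placement of the check differs in the real code):

	static bool reshape_cannot_progress(struct mddev *mddev)
	{
		return !md_is_rdwr(mddev) ||
		       test_bit(MD_RECOVERY_WAIT, &mddev->recovery) ||
		       test_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
	}

	/* for a bio that would otherwise wait for the reshape to pass it */
	if (reshape_cannot_progress(mddev)) {
		bio_io_error(bio);	/* fail instead of hanging */
		return true;
	}
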
>>>
>>> Tests:
>>>
>>> I have already run the following two tests many times and verified
>>> that they no longer fail:
>>>
>>> shell/integrity-caching.sh
>>> shell/lvconvert-raid-reshape.sh
>>>
>>> The other tests are still running. However, I'm sending this patchset
>>> now in case people think the fixes are not appropriate. Running the
>>> full test suite takes a lot of time in my VM, and I'll post the full
>>> test results soon.
>>>
>>> Yu Kuai (11):
>>>    md: don't ignore suspended array in md_check_recovery()
>>>    md: don't ignore read-only array in md_check_recovery()
>>>    md: make sure md_do_sync() will set MD_RECOVERY_DONE
>>>    md: don't register sync_thread for reshape directly
>>>    md: export helpers to stop sync_thread
>>>    dm-raid: really frozen sync_thread during suspend
>>>    md/dm-raid: don't call md_reap_sync_thread() directly
>>>    dm-raid: remove mddev_suspend/resume()
>>>    dm-raid: add a new helper prepare_suspend() in md_personality
>>>    md: export helper md_is_rdwr()
>>>    md/raid456: fix a deadlock for dm-raid456 while io concurrent with
>>>      reshape
>>>
>>>   drivers/md/dm-raid.c |  76 +++++++++++++++++++++----------
>>>   drivers/md/md.c      | 104 ++++++++++++++++++++++++++++---------------
>>>   drivers/md/md.h      |  16 +++++++
>>>   drivers/md/raid10.c  |  16 +------
>>>   drivers/md/raid5.c   |  61 +++++++++++++------------
>>>   5 files changed, 171 insertions(+), 102 deletions(-)
>>>
>>> --
>>> 2.39.2
>>>
> 
> .
> 

