Message-ID: <216fbc61-4f71-3796-5ec1-2e4cfa815ced@huaweicloud.com>
Date: Thu, 25 Jan 2024 09:08:59 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Song Liu <song@...nel.org>, mpatocka@...hat.com,
 dm-devel@...ts.linux.dev, snitzer@...nel.org, agk@...hat.com
Cc: xni@...hat.com, jbrassow@....redhat.com, neilb@...e.de,
 heinzm@...hat.com, shli@...com, akpm@...l.org, linux-kernel@...r.kernel.org,
 linux-raid@...r.kernel.org, yukuai1@...weicloud.com, yi.zhang@...wei.com,
 yangerkun@...wei.com, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v2 00/11] dm-raid: fix v6.7 regressions

Hi,

On 2024/01/25 8:46, Song Liu wrote:
> On Wed, Jan 24, 2024 at 1:18 AM Yu Kuai <yukuai3@...wei.com> wrote:
>>
>> First regression, related to stopping the sync thread:
>>
>> The lifetime of sync_thread is designed as follows (a simplified
>> sketch is given after the list):
>>
>> 1) Decide to start sync_thread, set MD_RECOVERY_NEEDED, and wake up the
>> daemon thread;
>> 2) The daemon thread detects that MD_RECOVERY_NEEDED is set, then sets
>> MD_RECOVERY_RUNNING and registers sync_thread;
>> 3) md_do_sync() executes the actual work; when it is done or
>> interrupted, it sets MD_RECOVERY_DONE and wakes up the daemon thread;
>> 4) The daemon thread detects that MD_RECOVERY_DONE is set, then clears
>> MD_RECOVERY_RUNNING and unregisters sync_thread;
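>>
>> A minimal user-space model of this flag handshake (a simplified sketch
>> only, not the actual md code: the MD_RECOVERY_* names match md.h, but
>> the helpers are hypothetical and the flow is sequential rather than
>> threaded):
>>
>>   #include <stdbool.h>
>>   #include <stdio.h>
>>
>>   /* Stand-ins for the md recovery flags. */
>>   static bool recovery_needed, recovery_running, recovery_done;
>>
>>   /* step 3: the sync work itself (done or interrupted), then wake daemon */
>>   static void md_do_sync_model(void)
>>   {
>>           recovery_done = true;
>>   }
>>
>>   static void daemon_thread_model(void)
>>   {
>>           /* step 2: NEEDED seen -> set RUNNING, register sync_thread */
>>           if (recovery_needed && !recovery_running) {
>>                   recovery_needed = false;
>>                   recovery_running = true;
>>                   md_do_sync_model();     /* a separate thread in real md */
>>           }
>>           /* step 4: DONE seen -> clear RUNNING, unregister sync_thread */
>>           if (recovery_done) {
>>                   recovery_done = false;
>>                   recovery_running = false;
>>           }
>>   }
>>
>>   int main(void)
>>   {
>>           recovery_needed = true;         /* step 1: request a sync ... */
>>           daemon_thread_model();          /* ... and wake the daemon */
>>           printf("running=%d done=%d\n", recovery_running, recovery_done);
>>           return 0;
>>   }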
>>
>> In v6.7, we fixed md/raid to follow this design with commit f52f5c71f3d4
>> ("md: fix stopping sync thread"); however, dm-raid was not considered at
>> that time, and the following tests will hang:
>>
>> shell/integrity-caching.sh
>> shell/lvconvert-raid-reshape.sh
>>
>> This patch set fixes the broken tests with patches 1-4:
>>   - patch 1 fixes step 4) being broken by a suspended array;
>>   - patch 2 fixes step 4) being broken by a read-only array;
>>   - patch 3 fixes step 3) being broken because md_do_sync() doesn't set
>>   MD_RECOVERY_DONE; note that this patch introduces a new problem where
>>   data will be corrupted, which is fixed in later patches;
>>   - patch 4 fixes step 1) being broken because sync_thread is registered and
>>   MD_RECOVERY_RUNNING is set directly;
>>
>> With patches 1-4, the above tests won't hang anymore; however, they
>> will still fail and complain that ext4 is corrupted.
>>
>> Second regression, related to the frozen sync thread:
>>
>> Note that for raid456, if reshape is interrupted, then calling
>> "pers->start_reshape" will corrupt data. This is because dm-raid relies on
>> md_do_sync() not setting MD_RECOVERY_DONE so that a new sync_thread won't
>> be registered, and patch 3 breaks exactly this.
>>
>>   - Patches 5-6 fix this problem by interrupting reshape and freezing
>>   sync_thread in dm_suspend(), then unfreezing and continuing reshape in
>>   dm_resume(). It's verified that the dm-raid tests won't complain that
>>   ext4 is corrupted anymore.
>>   - Patch 7 fixes the problem that raid_message() calls
>>   md_reap_sync_thread() directly, without holding 'reconfig_mutex'.
>>
>> Last regression, related to dm-raid456 IO concurrent with reshape:
>>
>> For raid456, if reshape is still in progress, then IO across the reshape
>> position will wait for reshape to make progress. However, for dm-raid,
>> in the following cases reshape will never make progress, hence IO will
>> hang (a sketch of the resulting deadlock is given after the list):
>>
>> 1) the array is read-only;
>> 2) MD_RECOVERY_WAIT is set;
>> 3) MD_RECOVERY_FROZEN is set;
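>>
>> A user-space sketch of the resulting hang (illustrative only, not the
>> raid5 code; all names are hypothetical stand-ins for mddev/conf state):
>>
>>   #include <stdbool.h>
>>
>>   static bool array_read_only, recovery_wait, recovery_frozen;
>>   static unsigned long long reshape_progress, bio_sector = 1024;
>>
>>   static bool reshape_can_advance(void)
>>   {
>>           return !array_read_only && !recovery_wait && !recovery_frozen;
>>   }
>>
>>   /* IO beyond reshape_progress must wait for reshape to catch up. */
>>   static void submit_io_model(void)
>>   {
>>           while (bio_sector >= reshape_progress) {
>>                   if (!reshape_can_advance())
>>                           continue;       /* cases 1)-3): spins forever */
>>                   reshape_progress += 512;        /* reshape progresses */
>>           }
>>           /* ... issue the IO once reshape has passed the bio ... */
>>   }
>>
>>   int main(void)
>>   {
>>           recovery_frozen = true; /* any one of the three cases */
>>           submit_io_model();      /* never returns: the IO hangs */
>>           return 0;
>>   }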
>>
>> After commit c467e97f079f ("md/raid6: use valid sector values to determine
>> if an I/O should wait on the reshape") fixed the problem that IO across
>> the reshape position doesn't wait for reshape, the dm-raid test
>> shell/lvconvert-raid-reshape.sh started to hang in raid5_make_request().
>>
>> For md/raid, the problem doesn't exist because:
>>
>> 1) If the array is read-only, it can be switched to read-write via
>>    ioctl/sysfs;
>> 2) md/raid never sets MD_RECOVERY_WAIT;
>> 3) If MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold
>>    'reconfig_mutex' anymore, so it can be cleared and reshape can continue
>>    via the sysfs api 'sync_action'.
>>
>> However, I'm not sure yet how to avoid the problem in dm-raid.
>>
>>   - patches 9-11 fix this problem by detecting the above 3 cases in
>>   dm_suspend(), and failing those IOs directly (a conceptual sketch
>>   follows).
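>>
>> Continuing the sketch above, the idea is roughly the following
>> (conceptual only, not the actual patches; the dm_suspend()/dm-raid
>> plumbing is omitted):
>>
>>   #include <errno.h>
>>
>>   /* Fail the IO up front instead of letting it wait forever. */
>>   static int submit_io_fixed_model(void)
>>   {
>>           if (bio_sector >= reshape_progress && !reshape_can_advance())
>>                   return -EIO;    /* fail IO across the reshape position */
>>           /* ... otherwise wait and issue the IO as before ... */
>>           return 0;
>>   }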
>>
>> If users really hit this IO error, then it means they were reading wrong
>> data before c467e97f079f. And it's safe to read/write the array after
>> reshape makes progress successfully.
> 
> c467e97f079f got backported to stable kernels (6.6.13, for example). We
> will need some fixes for them (to fix shell/lvconvert-raid-reshape.sh).
> 
> Mikulas and folks, please help review the analysis above and dm-raid
> changes. The failure was triggered by c467e97f079f. However, the commit
> is doing the right thing, so we really shouldn't revert it.
> 
>>
>> Tests:
>>
>> I have already run the following two tests many times and verified that
>> they won't fail anymore:
>>
>> shell/integrity-caching.sh
>> shell/lvconvert-raid-reshape.sh
> 
> shell/lvconvert-raid-reshape-linear_to_raid6-single-type.sh is failing
> with upstream + this set. (I need to fix some trivial compilation errors,
> which are probably last minute typos).

I'm running tests for this patchset overnight in my VM, and this test has
been run 9 times and all passed. Looks like I can't reproduce this
in my VM.

Thanks,
Kuai

> 
> Thanks,
> Song
> 
> .
> 

