Message-ID: <9d92a862-e728-5493-52c0-abc634eb6e97@huaweicloud.com>
Date: Thu, 6 Apr 2023 16:53:04 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Song Liu <song@...nel.org>, Yu Kuai <yukuai1@...weicloud.com>
Cc: Guoqing Jiang <guoqing.jiang@...ux.dev>, logang@...tatee.com,
pmenzel@...gen.mpg.de, agk@...hat.com, snitzer@...nel.org,
linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
yi.zhang@...wei.com, yangerkun@...wei.com,
Marc Smith <msmith626@...il.com>,
Logan Gunthorpe <logang@...tatee.com>,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next 1/6] Revert "md: unlock mddev before reap
sync_thread in action_store"
Hi,
On 2023/03/29 7:58, Song Liu wrote:
> On Wed, Mar 22, 2023 at 11:32 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
>>
>> Hi,
>>
>> On 2023/03/23 11:50, Guoqing Jiang wrote:
>>
>>> Combining your debug patch with the above steps, it seems you:
>>>
>>> 1. add a delay to action_store, so it can't get the lock in time.
>>> 2. echo "want_replacement", which triggers md_check_recovery, which can
>>> grab the lock to start the sync thread.
>>> 3. action_store finally holds the lock and clears RECOVERY_RUNNING in
>>> md_reap_sync_thread.
>>> 4. Then the newly added BUG_ON() is triggered, since RECOVERY_RUNNING was
>>> cleared in step 3.
>>
>> Yes, this is exactly what I did.
>>
>>> sync_thread can be interrupted once MD_RECOVERY_INTR is set, which means
>>> RUNNING can be cleared, so I am not sure the added BUG_ON is reasonable.
>>> And changing the BUG_ON
>>
>> I think the BUG_ON() is reasonable, because only md_reap_sync_thread() can
>> clear it. md_do_sync() will exit quickly if MD_RECOVERY_INTR is set, but
>> md_do_sync() should never see MD_RECOVERY_RUNNING cleared; otherwise there
>> is no guarantee that only one sync_thread can be in progress.
>>
>>> like this makes more sense to me.
>>>
>>> +BUG_ON(!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) &&
>>> +!test_bit(MD_RECOVERY_INTR, &mddev->recovery));
>>
>> I think this can be reproduced likewise: md_check_recovery() clears
>> MD_RECOVERY_INTR, and the new sync_thread triggered by echo
>> "want_replacement" won't set this bit.
>>
>>>
>>> I think there might be a racy window like you described, but it should be
>>> really small. I prefer to just add a few lines like this instead of
>>> reverting and introducing a new lock to resolve the same issue (if it is
>>> one).
>>
>> The new lock that I added in this patchset just tries to synchronize idle
>> and frozen from action_store (patch 3); I can drop it if you think it is
>> not necessary.
>>
>> The main change is patch 4; it does not add many new lines, and I really
>> don't like adding new flags unless we have to, since the current code is
>> already hard to understand...
>>
>> By the way, I'm concerned that dropping the mutex to unregister
>> sync_thread might not be safe, since the mutex protects lots of stuff and
>> there might be other implicit dependencies.
>>
>>>
>>> TBH, I am reluctant to see the changes in this series; they can only be
>>> considered acceptable under these conditions:
>>>
>>> 1. the previous raid456 bug can be fixed in this way too; hopefully Marc
>>> or others can verify it.
>>> 2. all the tests in mdadm pass.
>
> AFAICT, this set looks like a better solution for this problem. But I agree
> that we need to make sure it fixes the original bug. The mdadm tests are not
> in very good shape at the moment. I will spend more time looking into
> these tests.
While I'm working on another thread to protect md_thread with rcu, I
found that this patch has other defects that can, in theory, cause a
null-ptr-dereference: md_unregister_thread(&mddev->sync_thread) can run
concurrently with other contexts that access sync_thread, for example:
t1: md_set_readonly                        t2: action_store
                                           md_unregister_thread
                                           // 'reconfig_mutex' is not held
// 'reconfig_mutex' is held by caller
 if (mddev->sync_thread)
                                           thread = *threadp
                                           *threadp = NULL
 wake_up_process(mddev->sync_thread->tsk)
 // null-ptr-dereference
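
To spell the race out in code, the two paths look roughly like this (a
simplified sketch from my reading of the code, not the exact mainline
lines):

/* t2: action_store -> md_reap_sync_thread -> md_unregister_thread(),
 * called without 'reconfig_mutex' because of the patch being reverted */
void md_unregister_thread(struct md_thread **threadp)
{
	struct md_thread *thread;

	spin_lock(&pers_lock);
	thread = *threadp;
	if (!thread) {
		spin_unlock(&pers_lock);
		return;
	}
	*threadp = NULL;	/* mddev->sync_thread becomes NULL here */
	spin_unlock(&pers_lock);

	kthread_stop(thread->tsk);
	kfree(thread);
}

/* t1: md_set_readonly(), 'reconfig_mutex' held by the caller */
	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
		set_bit(MD_RECOVERY_INTR, &mddev->recovery);
	if (mddev->sync_thread)
		/* t2 can set mddev->sync_thread to NULL between the check
		 * above and the dereference below -> null-ptr-dereference */
		wake_up_process(mddev->sync_thread->tsk);

Once 'reconfig_mutex' no longer orders the two paths, nothing prevents t2
from clearing the pointer between t1's check and its dereference.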
So, I think this revert makes more sense. 😉
Thanks,
Kuai