Message-ID: <059a8c6a-628f-ed18-0d5e-7ad3cad93813@huaweicloud.com>
Date: Wed, 26 Apr 2023 11:57:32 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Song Liu <song@...nel.org>, Yu Kuai <yukuai1@...weicloud.com>
Cc: logang@...tatee.com, axboe@...nel.dk, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
yi.zhang@...wei.com, yangerkun@...wei.com,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next v7 5/5] md: protect md_thread with rcu
Hi,
On 2023/04/26 11:20, Song Liu wrote:
> On Tue, Apr 25, 2023 at 4:54 AM Yu Kuai <yukuai1@...weicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@...wei.com>
>>
>> Our test reports a use-after-free (UAF) for 'mddev->sync_thread':
>>
>> T1                              T2
>> md_start_sync
>>  md_register_thread
>>  // mddev->sync_thread is set
>>                                 raid1d
>>                                  md_check_recovery
>>                                   md_reap_sync_thread
>>                                    md_unregister_thread
>>                                     kfree
>>
>>  md_wakeup_thread
>>   wake_up
>>  ->sync_thread was freed
>>
>> The root cause is that there is a small window between registering the
>> thread and waking it up, in which the thread can be freed concurrently.
>>
>> Currently, the global spinlock 'pers_lock' is borrowed to protect
>> 'mddev->thread'. This problem could be fixed the same way; however,
>> there are similar problems elsewhere, and using a global lock for all
>> of these cases is not good.
>>
>> This patch protects all md_thread accesses with RCU.
>>
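
To illustrate the scheme described above, here is a minimal sketch of the
RCU pattern (the *_sketch helper names are made up for the example and the
details are simplified from the actual patch): readers look up the thread
pointer under rcu_read_lock(), and the unregister path publishes NULL and
waits for a grace period before stopping and freeing the thread.

#include <linux/rcupdate.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/wait.h>
#include "md.h"	/* struct md_thread, THREAD_WAKEUP */

static void md_wakeup_thread_sketch(struct md_thread __rcu **threadp)
{
        struct md_thread *t;

        /*
         * Readers only dereference the thread pointer inside an RCU
         * read-side critical section, so a concurrent unregister cannot
         * free it out from under them before the grace period ends.
         */
        rcu_read_lock();
        t = rcu_dereference(*threadp);
        if (t) {
                set_bit(THREAD_WAKEUP, &t->flags);
                wake_up(&t->wqueue);
        }
        rcu_read_unlock();
}

static void md_unregister_thread_sketch(struct md_thread __rcu **threadp)
{
        struct md_thread *t = rcu_dereference_protected(*threadp, true);

        if (!t)
                return;
        /*
         * Publish NULL first, then wait for all in-flight readers
         * (e.g. md_wakeup_thread) before stopping and freeing.
         */
        rcu_assign_pointer(*threadp, NULL);
        synchronize_rcu();
        kthread_stop(t->tsk);
        kfree(t);
}
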
>> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
>> ---
>> block/blk-cgroup.c | 3 ++
>> drivers/md/md-bitmap.c | 10 ++++--
>> drivers/md/md-cluster.c | 17 ++++++----
>> drivers/md/md-multipath.c | 4 +--
>> drivers/md/md.c | 69 ++++++++++++++++++---------------------
>> drivers/md/md.h | 8 ++---
>> drivers/md/raid1.c | 7 ++--
>> drivers/md/raid1.h | 2 +-
>> drivers/md/raid10.c | 20 +++++++-----
>> drivers/md/raid10.h | 2 +-
>> drivers/md/raid5-cache.c | 22 ++++++++-----
>> drivers/md/raid5.c | 15 +++++----
>> drivers/md/raid5.h | 2 +-
>> 13 files changed, 100 insertions(+), 81 deletions(-)
>>
>> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
>> index 1c1ebeb51003..0ecb4cce8af2 100644
>> --- a/block/blk-cgroup.c
>> +++ b/block/blk-cgroup.c
>> @@ -527,6 +527,9 @@ static void blkg_destroy_all(struct gendisk *disk)
>> list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
>> struct blkcg *blkcg = blkg->blkcg;
>>
>> + if (hlist_unhashed(&blkg->blkcg_node))
>> + continue;
>> +
>
> This change is not related, right?
Yes, it's not related. Sorry, I mistakenly squashed another fix patch into
this one...
>
> I don't think we can rush this change in the 6.4 merge window. Let's
> test it more thoroughly and ship it in the next merge window.
Of course.
Thanks,
Kuai