Message-ID: <CAPhsuW4nNJLuPff9trbsKYsBSQGtBC++Zo1Y3JQHphX2j4kMVw@mail.gmail.com>
Date: Tue, 14 Mar 2023 13:08:16 -0700
From: Song Liu <song@...nel.org>
To: Yu Kuai <yukuai3@...wei.com>
Cc: Yu Kuai <yukuai1@...weicloud.com>, agk@...hat.com,
snitzer@...nel.org, linux-kernel@...r.kernel.org,
linux-raid@...r.kernel.org, yi.zhang@...wei.com,
yangerkun@...wei.com
Subject: Re: [PATCH -next 5/5] md: protect md_thread with a new disk level
spin lock
On Tue, Mar 14, 2023 at 9:58 AM Song Liu <song@...nel.org> wrote:
>
> On Tue, Mar 14, 2023 at 3:54 AM Yu Kuai <yukuai3@...wei.com> wrote:
> >
> > Hi, Song!
> >
> > On 2023/03/11 17:31, Yu Kuai wrote:
> > > From: Yu Kuai <yukuai3@...wei.com>
> > >
> > > Our test reports a use-after-free (UAF) for 'mddev->sync_thread':
> > >
> > > T1                              T2
> > > md_start_sync
> > >  md_register_thread
> > >                                  raid1d
> > >                                   md_check_recovery
> > >                                    md_reap_sync_thread
> > >                                     md_unregister_thread
> > >                                      kfree
> > >
> > > md_wakeup_thread
> > >  wake_up
> > >  ->sync_thread was freed
> > >
> > > Currently the global spinlock 'pers_lock' is borrowed to protect
> > > 'mddev->thread'. This problem could be fixed the same way, but other
> > > md_thread instances might have similar problems, and borrowing a global
> > > lock for this is not a good idea.
> > >
> > > This patch uses a disk-level spinlock to protect md_thread in the
> > > relevant APIs.
> > >
> > > Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> > > ---
> > > drivers/md/md.c | 23 ++++++++++-------------
> > > drivers/md/md.h | 1 +
> > > 2 files changed, 11 insertions(+), 13 deletions(-)
> > >
> > > diff --git a/drivers/md/md.c b/drivers/md/md.c
> > > index ab9299187cfe..a952978884a5 100644
> > > --- a/drivers/md/md.c
> > > +++ b/drivers/md/md.c
> > > @@ -663,6 +663,7 @@ void mddev_init(struct mddev *mddev)
> > > atomic_set(&mddev->active, 1);
> > > atomic_set(&mddev->openers, 0);
> > > spin_lock_init(&mddev->lock);
> > > + spin_lock_init(&mddev->thread_lock);
> > > atomic_set(&mddev->flush_pending, 0);
> > > init_waitqueue_head(&mddev->sb_wait);
> > > init_waitqueue_head(&mddev->recovery_wait);
> > > @@ -801,13 +802,8 @@ void mddev_unlock(struct mddev *mddev)
> > > } else
> > > mutex_unlock(&mddev->reconfig_mutex);
> > >
> > > - /* As we've dropped the mutex we need a spinlock to
> > > - * make sure the thread doesn't disappear
> > > - */
> > > - spin_lock(&pers_lock);
> > > md_wakeup_thread(&mddev->thread, mddev);
> > > wake_up(&mddev->sb_wait);
> > > - spin_unlock(&pers_lock);
> > > }
> > > EXPORT_SYMBOL_GPL(mddev_unlock);
> > >
> > > @@ -7895,13 +7891,16 @@ static int md_thread(void *arg)
> > >
> > > void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
> > > {
> > > - struct md_thread *thread = *threadp;
> > > + struct md_thread *thread;
> > >
> > > + spin_lock(&mddev->thread_lock);
> > > + thread = *threadp;
> > > if (thread) {
> > > pr_debug("md: waking up MD thread %s.\n", thread->tsk->comm);
> > > set_bit(THREAD_WAKEUP, &thread->flags);
> > > wake_up(&thread->wqueue);
> > > }
> > > + spin_unlock(&mddev->thread_lock);
> >
> > I just found that md_wakeup_thread can be called from irq context:
> >
> > md_safemode_timeout
> > md_wakeup_thread
> >
> > And I need to use the irq-safe spinlock APIs here.
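> >
> > For reference, a rough sketch of what the irq-safe variant of
> > md_wakeup_thread() could look like (illustrative only, reusing the
> > 'mddev->thread_lock' and the signature from the patch above; the actual
> > new version may differ):
> >
> > void md_wakeup_thread(struct md_thread **threadp, struct mddev *mddev)
> > {
> > 	struct md_thread *thread;
> > 	unsigned long flags;
> >
> > 	/* irq-safe locking: this path can be reached from timer
> > 	 * context via md_safemode_timeout().
> > 	 */
> > 	spin_lock_irqsave(&mddev->thread_lock, flags);
> > 	thread = *threadp;
> > 	if (thread) {
> > 		pr_debug("md: waking up MD thread %s.\n", thread->tsk->comm);
> > 		set_bit(THREAD_WAKEUP, &thread->flags);
> > 		wake_up(&thread->wqueue);
> > 	}
> > 	spin_unlock_irqrestore(&mddev->thread_lock, flags);
> > }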
> >
> > Can you drop this version from md-next? I'll send a new version after
> > verifying that there are no new regressions, at least for the mdadm tests.
>
> I will drop it from md-next. Please send a new version.
To clarify: I dropped the whole set from md-next. Please resend the set
after fixing the issue.
Thanks,
Song