Message-ID: <CANpmjNNc8LitzytmUD1EE-yJ1r3tE_NiZPNf05ay10tRmBzfxg@mail.gmail.com>
Date: Sat, 5 Oct 2019 16:34:22 +0200
From: Marco Elver <elver@...gle.com>
To: Christian Brauner <christian.brauner@...ntu.com>
Cc: syzbot+c5d03165a1bd1dead0c1@...kaller.appspotmail.com,
bsingharora@...il.com, LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs@...glegroups.com
Subject: Re: [PATCH] taskstats: fix data-race
On Sat, 5 Oct 2019 at 16:15, Christian Brauner
<christian.brauner@...ntu.com> wrote:
>
> On Sat, Oct 05, 2019 at 03:33:07PM +0200, Marco Elver wrote:
> > On Sat, 5 Oct 2019 at 13:28, Christian Brauner
> > <christian.brauner@...ntu.com> wrote:
> > >
> > > When assigning and testing taskstats in taskstats_exit() there's a
> > > race around writing and reading sig->stats.
> > >
> > > cpu0:
> > > task calls exit()
> > > do_exit()
> > > -> taskstats_exit()
> > > -> taskstats_tgid_alloc()
> > > The task takes sighand lock and assigns new stats to sig->stats.
> > >
> > > cpu1:
> > > task catches signal
> > > do_exit()
> > > -> taskstats_exit()
> > > -> taskstats_tgid_alloc()
> > > The task reads sig->stats __without__ holding the sighand lock,
> > > seeing garbage.
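> > >
> > > As a sketch of the two paths (condensed from the current
> > > taskstats_tgid_alloc(), allocation and error handling elided):
> > >
> > >         /* cpu1: unlocked read, racing with the locked store below */
> > >         if (sig->stats || thread_group_empty(tsk))
> > >                 goto ret;
> > >
> > >         /* cpu0: store under sighand lock */
> > >         spin_lock_irq(&tsk->sighand->siglock);
> > >         if (!sig->stats)
> > >                 sig->stats = stats;
> > >         spin_unlock_irq(&tsk->sighand->siglock);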
> >
> > Is the task seeing garbage when it reads the data pointed to by
> > stats, or is it just the pointer that would be garbage?
>
> I expect the pointer to be garbage.
>
> >
> > My only observation here is that the previous version was trying to do
> > double-checked locking, to avoid taking the lock when sig->stats was
> > already set. The obvious problem with that version is the plain
> > read/write and the missing memory ordering: the write inside the
> > critical section should be an smp_store_release(), and the unlocked
> > check at the start should be a single smp_load_acquire().
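> >
> > As a minimal sketch of the pairing I have in mind (taskstats details
> > stripped, allocation of stats_new elided):
> >
> >         /* fast path: one acquire load, pairs with the release below */
> >         stats = smp_load_acquire(&sig->stats);
> >         if (stats)
> >                 return stats;
> >
> >         spin_lock_irq(&tsk->sighand->siglock);
> >         if (!sig->stats)
> >                 /* publish: orders the initialization before the store */
> >                 smp_store_release(&sig->stats, stats_new);
> >         spin_unlock_irq(&tsk->sighand->siglock);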
> >
> > Maybe I missed something somewhere, but my suggestion below might be
> > an equivalent fix that avoids taking the lock when sig->stats is
> > already set. If performance is not critical here, it's probably not
> > worth it.
>
> The only point of contention is when the whole thread-group exits (e.g.
> via exit_group(2), since threads in a thread-group share the signal
> struct). The reason I didn't add memory barriers is that we need to
> take the spinlock for the actual list manipulation anyway.
> But I don't mind incorporating the acquire/release.
Thanks for the clarification. Since you no longer do double-checked
locking, explicit memory barriers shouldn't be needed: spin_lock() and
spin_unlock() already provide acquire/release ordering.
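
For reference, a sketch of the lock-only version I understand you're
going for (my reconstruction, not your exact patch):

        /* allocation stays outside the critical section; NULL is fine */
        stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);

        spin_lock_irq(&tsk->sighand->siglock);
        if (!sig->stats) {
                sig->stats = stats_new;
                stats_new = NULL;
        }
        stats = sig->stats;
        spin_unlock_irq(&tsk->sighand->siglock);

        if (stats_new)
                kmem_cache_free(taskstats_cache, stats_new);
        return stats;

Here spin_lock() acts as an acquire and spin_unlock() as a release, so
any thread that takes siglock sees the zeroed allocation without any
explicit smp_* barriers.
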
-- Marco
> Christian
>
> >
> > Thanks,
> > -- Marco
> >
> > diff --git a/kernel/taskstats.c b/kernel/taskstats.c
> > index 13a0f2e6ebc2..f58dd285a44b 100644
> > --- a/kernel/taskstats.c
> > +++ b/kernel/taskstats.c
> > @@ -554,25 +554,31 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
> >  static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
> >  {
> >          struct signal_struct *sig = tsk->signal;
> > -        struct taskstats *stats;
> > +        struct taskstats *stats_new, *stats;
> >
> > -        if (sig->stats || thread_group_empty(tsk))
> > +        /* acquire load to make pointed-to data visible */
> > +        stats = smp_load_acquire(&sig->stats);
> > +        if (stats || thread_group_empty(tsk))
> >                  goto ret;
> >
> >          /* No problem if kmem_cache_zalloc() fails */
> > -        stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> > +        stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> >
> >          spin_lock_irq(&tsk->sighand->siglock);
> > -        if (!sig->stats) {
> > -                sig->stats = stats;
> > -                stats = NULL;
> > +        stats = sig->stats;
> > +        if (!stats) {
> > +                stats = stats_new;
> > +                /* release store to order zalloc before */
> > +                smp_store_release(&sig->stats, stats_new);
> > +                stats_new = NULL;
> >          }
> >          spin_unlock_irq(&tsk->sighand->siglock);
> >
> > -        if (stats)
> > -                kmem_cache_free(taskstats_cache, stats);
> > +        if (stats_new)
> > +                kmem_cache_free(taskstats_cache, stats_new);
> > +
> >  ret:
> > -        return sig->stats;
> > +        return stats;
> >  }
> >
> >  /* Send pid data out on exit */