Message-ID: <CANpmjNMqTupyPc6-PCviB1HPTHawjzNL1r1gmdQqnwCvE=BNNA@mail.gmail.com>
Date: Sat, 5 Oct 2019 15:33:07 +0200
From: Marco Elver <elver@...gle.com>
To: Christian Brauner <christian.brauner@...ntu.com>
Cc: syzbot+c5d03165a1bd1dead0c1@...kaller.appspotmail.com,
bsingharora@...il.com, LKML <linux-kernel@...r.kernel.org>,
syzkaller-bugs@...glegroups.com
Subject: Re: [PATCH] taskstats: fix data-race
On Sat, 5 Oct 2019 at 13:28, Christian Brauner
<christian.brauner@...ntu.com> wrote:
>
> When assigning and testing sig->stats in taskstats_exit() there's a
> race between writing and reading sig->stats.
>
> cpu0:
> task calls exit()
> do_exit()
> -> taskstats_exit()
> -> taskstats_tgid_alloc()
> The task takes sighand lock and assigns new stats to sig->stats.
>
> cpu1:
> task catches signal
> do_exit()
> -> taskstats_exit()
> -> taskstats_tgid_alloc()
> The task reads sig->stats __without__ holding sighand lock, seeing
> garbage.

Is the task seeing garbage when it reads the data pointed to by stats,
or is it only the pointer itself that would be garbage?
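
For concreteness, the pre-patch fast path boils down to the following
double-checked pattern with a plain, unordered load. This is an
illustrative userspace stand-in (the names and the pthread mutex are my
own analogues, not the kernel code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Stand-ins for sig->stats and the sighand lock (illustrative only). */
static int *stats;                       /* plain pointer, no ordering */
static pthread_mutex_t siglock = PTHREAD_MUTEX_INITIALIZER;

static int *tgid_alloc_racy(void)
{
	if (stats)                       /* plain racy read, no lock */
		goto ret;

	int *stats_new = calloc(1, sizeof(*stats_new));

	pthread_mutex_lock(&siglock);
	if (!stats) {
		stats = stats_new;       /* plain write, inside the lock */
		stats_new = NULL;
	}
	pthread_mutex_unlock(&siglock);

	free(stats_new);                 /* free(NULL) is a no-op */
ret:
	return stats;
}
```

The plain read outside the lock is exactly what syzbot flags: nothing
orders it against the write inside the critical section.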

My only observation here is that the previous version was trying to do
double-checked locking, to avoid taking the lock if sig->stats was
already set. The obvious problem with the previous version is the plain
read/write and the missing memory ordering: the write inside the
critical section should be an smp_store_release(), and there should be
a single smp_load_acquire() at the start.
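
In a userspace sketch, using C11 atomics as an analogue of
smp_load_acquire()/smp_store_release() (again, illustrative stand-ins,
not the kernel code), the correctly ordered double-checked locking
looks like:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Stand-ins for sig->stats and the sighand lock (illustrative only). */
static _Atomic(int *) stats;
static pthread_mutex_t siglock = PTHREAD_MUTEX_INITIALIZER;

static int *tgid_alloc_fixed(void)
{
	/* One acquire load at the start: if we see the pointer, we also
	 * see the zero-initialised data it points to. */
	int *s = atomic_load_explicit(&stats, memory_order_acquire);
	if (s)
		return s;

	int *stats_new = calloc(1, sizeof(*stats_new));

	pthread_mutex_lock(&siglock);
	s = atomic_load_explicit(&stats, memory_order_relaxed);
	if (!s) {
		s = stats_new;
		/* Release store orders the calloc() before publication. */
		atomic_store_explicit(&stats, stats_new,
				      memory_order_release);
		stats_new = NULL;
	}
	pthread_mutex_unlock(&siglock);

	free(stats_new);                 /* free(NULL) is a no-op */
	return s;
}
```

The relaxed re-read inside the lock is fine because the mutex itself
orders it against any concurrent release store.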

Maybe I missed something somewhere, but maybe my suggestion below would
be an equivalent fix without always having to take the lock just to
read the pointer? If performance is not critical here, then it's
probably not worth it.
Thanks,
-- Marco
diff --git a/kernel/taskstats.c b/kernel/taskstats.c
index 13a0f2e6ebc2..f58dd285a44b 100644
--- a/kernel/taskstats.c
+++ b/kernel/taskstats.c
@@ -554,25 +554,31 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
 static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
 {
 	struct signal_struct *sig = tsk->signal;
-	struct taskstats *stats;
+	struct taskstats *stats_new, *stats;
 
-	if (sig->stats || thread_group_empty(tsk))
+	/* acquire load to make pointed-to data visible */
+	stats = smp_load_acquire(&sig->stats);
+	if (stats || thread_group_empty(tsk))
 		goto ret;
 
 	/* No problem if kmem_cache_zalloc() fails */
-	stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
+	stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
 
 	spin_lock_irq(&tsk->sighand->siglock);
-	if (!sig->stats) {
-		sig->stats = stats;
-		stats = NULL;
+	stats = sig->stats;
+	if (!stats) {
+		stats = stats_new;
+		/* release store to order zalloc before */
+		smp_store_release(&sig->stats, stats_new);
+		stats_new = NULL;
 	}
 	spin_unlock_irq(&tsk->sighand->siglock);
 
-	if (stats)
-		kmem_cache_free(taskstats_cache, stats);
+	if (stats_new)
+		kmem_cache_free(taskstats_cache, stats_new);
+
 ret:
-	return sig->stats;
+	return stats;
 }
 
 /* Send pid data out on exit */