Message-ID: <20150807153828.GE14626@mtj.duckdns.org>
Date: Fri, 7 Aug 2015 11:38:28 -0400
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, riel@...hat.com, dedekind1@...il.com,
linux-kernel@...r.kernel.org, mgorman@...e.de, rostedt@...dmis.org,
juri.lelli@....com, Oleg Nesterov <oleg@...hat.com>
Subject: Re: [RFC][PATCH 1/4] sched: Fix a race between __kthread_bind() and
sched_setaffinity()
Hello,
On Fri, Aug 07, 2015 at 05:29:56PM +0200, Peter Zijlstra wrote:
> Even if we were to strictly order those stores you could have (note
> there is no matching barrier, as there is only the one load, so ordering
> cannot help):
>
>   __kthread_bind()
>                           <SYSCALL>
>                           sched_setaffinity()
>                             if (p->flags & PF_NO_SETAFFINITY) /* false-not-taken */
>     p->flags |= PF_NO_SETAFFINITY;
>     smp_wmb();
>     do_set_cpus_allowed();
>                             set_cpus_allowed_ptr()
>
> > I think the code was better before. Can't we just revert workqueue.c
> > part?
>
> I agree that the new argument isn't pretty, but I cannot see how
> workqueues would not be affected by this.
So, the problem there is that __kthread_bind() doesn't grab the same
lock that the syscall side grabs; workqueue, however, used
set_cpus_allowed_ptr(), which goes through the rq locking, so as long
as the check on the syscall side is moved inside the rq lock, it should
be fine.
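
To make the idea concrete, here is a minimal userspace sketch (not the
actual kernel patch; a pthread mutex stands in for the rq lock and the
names are illustrative): because both the bind side and the syscall side
take the same lock, the PF_NO_SETAFFINITY check cannot slip in between
the flag store and the affinity update.

	#include <pthread.h>
	#include <stdio.h>

	#define PF_NO_SETAFFINITY 0x04000000	/* same value as the kernel flag */

	/* Stand-ins for the task state and the rq lock; purely illustrative. */
	static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
	static unsigned int task_flags;
	static unsigned long cpus_allowed = ~0UL;

	/*
	 * kthread side: set the flag and the affinity under the "rq lock",
	 * i.e. through the same locking the syscall side uses.
	 */
	static void kthread_bind_mask(unsigned long mask)
	{
		pthread_mutex_lock(&rq_lock);
		task_flags |= PF_NO_SETAFFINITY;
		cpus_allowed = mask;
		pthread_mutex_unlock(&rq_lock);
	}

	/*
	 * syscall side: the PF_NO_SETAFFINITY check happens under the same
	 * lock, so it can no longer race with the bind above.
	 */
	static int sched_setaffinity_user(unsigned long mask)
	{
		int ret = 0;

		pthread_mutex_lock(&rq_lock);
		if (task_flags & PF_NO_SETAFFINITY)
			ret = -1;	/* -EINVAL in the kernel */
		else
			cpus_allowed = mask;
		pthread_mutex_unlock(&rq_lock);
		return ret;
	}

	int main(void)
	{
		kthread_bind_mask(0x1);
		printf("setaffinity after bind: %d\n", sched_setaffinity_user(0xf));
		printf("cpus_allowed: %#lx\n", cpus_allowed);
		return 0;
	}

With the unlocked check in the original code, the syscall could observe
the flag as clear and then overwrite cpus_allowed after the bind had
completed; with the check inside the lock that window is gone.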
Thanks.
--
tejun