Message-ID: <206890579.32344.1507241955010.JavaMail.zimbra@efficios.com>
Date: Thu, 5 Oct 2017 22:19:15 +0000 (UTC)
From: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To: Andrea Parri <parri.andrea@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
dipankar <dipankar@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Josh Triplett <josh@...htriplett.org>,
Thomas Gleixner <tglx@...utronix.de>,
rostedt <rostedt@...dmis.org>,
David Howells <dhowells@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
fweisbec <fweisbec@...il.com>, Oleg Nesterov <oleg@...hat.com>,
Boqun Feng <boqun.feng@...il.com>,
Andrew Hunter <ahh@...gle.com>,
maged michael <maged.michael@...il.com>,
gromer <gromer@...gle.com>, Avi Kivity <avi@...lladb.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Dave Watson <davejwatson@...com>,
Alan Stern <stern@...land.harvard.edu>,
Will Deacon <will.deacon@....com>,
Andy Lutomirski <luto@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Nicholas Piggin <npiggin@...il.com>,
linuxppc-dev <linuxppc-dev@...ts.ozlabs.org>,
linux-arch <linux-arch@...r.kernel.org>
Subject: Re: [PATCH tip/core/rcu 1/3] membarrier: Provide register expedited
private command
----- On Oct 5, 2017, at 6:02 PM, Andrea Parri parri.andrea@...il.com wrote:
> On Thu, Oct 05, 2017 at 04:02:06PM +0000, Mathieu Desnoyers wrote:
>> ----- On Oct 5, 2017, at 8:12 AM, Peter Zijlstra peterz@...radead.org wrote:
>>
>> > On Wed, Oct 04, 2017 at 02:37:53PM -0700, Paul E. McKenney wrote:
>> >> diff --git a/arch/powerpc/kernel/membarrier.c b/arch/powerpc/kernel/membarrier.c
>> >> new file mode 100644
>> >> index 000000000000..b0d79a5f5981
>> >> --- /dev/null
>> >> +++ b/arch/powerpc/kernel/membarrier.c
>> >> @@ -0,0 +1,45 @@
>> >
>> >> +void membarrier_arch_register_private_expedited(struct task_struct *p)
>> >> +{
>> >> + struct task_struct *t;
>> >> +
>> >> + if (get_nr_threads(p) == 1) {
>> >> + set_thread_flag(TIF_MEMBARRIER_PRIVATE_EXPEDITED);
>> >> + return;
>> >> + }
>> >> + /*
>> >> + * Coherence of TIF_MEMBARRIER_PRIVATE_EXPEDITED against thread
>> >> + * fork is protected by siglock.
>> >> + */
>> >> + spin_lock(&p->sighand->siglock);
>> >> + for_each_thread(p, t)
>> >> + set_ti_thread_flag(task_thread_info(t),
>> >> + TIF_MEMBARRIER_PRIVATE_EXPEDITED);
>> >
>> > I'm not sure this works correctly vs CLONE_VM without CLONE_THREAD.
>>
>> The intent here is to hold the sighand siglock to provide mutual
>> exclusion against invocation of membarrier_fork(p, clone_flags)
>> by copy_process().
>>
>> copy_process() grabs spin_lock(&current->sighand->siglock) for both
>> CLONE_THREAD and not CLONE_THREAD flags.
>>
>> What am I missing here ?
>>
>> >
>> >> + spin_unlock(&p->sighand->siglock);
>> >> + /*
>> >> + * Ensure all future scheduler executions will observe the new
>> >> + * thread flag state for this process.
>> >> + */
>> >> + synchronize_sched();
>> >
>> > This relies on the flag being read inside rq->lock, right?
>>
>> Yes. The flag is read by membarrier_arch_switch_mm(), invoked
>> within switch_mm_irqs_off(), called by context_switch() before
>> finish_task_switch() releases the rq lock.
>
> I fail to grasp the relation between this synchronize_sched() and rq->lock.
>
> (Besides, we made no reference to rq->lock while discussing:
>
> https://github.com/paulmckrcu/litmus/commit/47039df324b60ace0cf7b2403299580be730119b
> replace membarrier_arch_sched_in with switch_mm_irqs_off )
>
> Could you elaborate?
Hi Andrea,

AFAIU the scheduler rq->lock is held while preemption is disabled.
synchronize_sched() is used here to ensure that all pre-existing
preempt-off critical sections have completed.

So saying that we use synchronize_sched() to synchronize with rq->lock
would be stretching the truth a bit. It's actually only true because the
scheduler holds the rq->lock within a preempt-off critical section.
Thanks,
Mathieu
>
> Andrea
>
>
>>
>> Is the comment clear enough, or do you have suggestions for
>> improvements ?
>
>
>
>>
>> Thanks,
>>
>> Mathieu
>>
>> >
>> >> +}
>>
>> --
>> Mathieu Desnoyers
>> EfficiOS Inc.
>> http://www.efficios.com
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com