Message-ID: <1262884716.4049.103.camel@laptop>
Date: Thu, 07 Jan 2010 18:18:36 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: paulmck@...ux.vnet.ibm.com
Cc: Josh Triplett <josh@...htriplett.org>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
akpm@...ux-foundation.org, tglx@...utronix.de,
Valdis.Kletnieks@...edu, dhowells@...hat.com, laijs@...fujitsu.com,
dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory barrier

On Thu, 2010-01-07 at 08:52 -0800, Paul E. McKenney wrote:
> On Thu, Jan 07, 2010 at 09:44:15AM +0100, Peter Zijlstra wrote:
> > On Wed, 2010-01-06 at 22:35 -0800, Josh Triplett wrote:
> > >
> > > The number of threads doesn't matter nearly as much as the number of
> > > threads typically running at a time compared to the number of
> > > processors. Of course, we can't measure that as easily, but I don't
> > > know that your proposed heuristic would approximate it well.
> >
> > Quite agreed, and not disturbing RT tasks is even more important.
>
> OK, so I stand un-Reviewed-by twice in one morning. ;-)
>
> > A simple:
> >
> >   for_each_cpu(cpu, current->mm->cpu_vm_mask) {
> >     if (cpu_curr(cpu)->mm == current->mm)
> >       smp_call_function_single(cpu, func, NULL, 1);
> >   }
> >
> > seems far preferable over anything else; if you really want, you can use
> > a cpumask to copy cpu_vm_mask in and unset bits and use the mask with
> > smp_call_function_many(), but that includes having to allocate the
> > cpumask, which might or might not be too expensive for Mathieu.
>
> This would be vulnerable to the sys_membarrier() CPU seeing an old value
> of cpu_curr(cpu)->mm, and that other task seeing the old value of the
> pointer we are trying to RCU-destroy, right?

Right, and I was relying on sys_membarrier() itself executing an mb on
entry. If we observe a matching ->mm but that cpu has since scheduled
away, we're still good, because the schedule implied a barrier (we'll
send the IPI anyway); if we fail to observe it because the task only
gets scheduled in after we do the iteration, we're also good, because
scheduling it in likewise implied a barrier.
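
Roughly, the ordering I'm relying on looks like the below -- completely
untested, and the names membarrier_ipi() / membarrier_ipi_mm_cpus() are
made up for illustration (the latter is spelled out further down):

/* names made up for illustration only */
static void membarrier_ipi(void *unused)
{
	smp_mb();	/* cpus we do catch execute the barrier in the IPI */
}

SYSCALL_DEFINE0(membarrier)
{
	/*
	 * Order the caller's prior accesses against the ->mm scan.  A cpu
	 * that only schedules our ->mm in after the scan misses the IPI,
	 * but the context switch it just went through already implied a
	 * full barrier, so we're still good.
	 */
	smp_mb();

	membarrier_ipi_mm_cpus(current->mm);

	/*
	 * A cpu that scheduled away after we sampled its ->mm gets a
	 * spurious IPI, which is harmless; the barrier it needed was
	 * again implied by the context switch.
	 */
	smp_mb();

	return 0;
}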

As to keeping rcu_read_lock() around the iteration: yes, we definitely
need that, to ensure the remote task_struct we reach through cpu_curr()
stays valid while we look at it.
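
Something like this (equally untested; mm_cpumask() is just the accessor
for ->cpu_vm_mask, and using cpu_curr() means this wants to live in
kernel/sched.c):

/* illustrative sketch, not a finished patch */
static void membarrier_ipi_mm_cpus(struct mm_struct *mm)
{
	int cpu;

	rcu_read_lock();	/* keeps the remote task_struct valid */
	for_each_cpu(cpu, mm_cpumask(mm)) {
		/*
		 * Racy ->mm check; both misses and false positives are
		 * fine per the argument above, we only rely on the
		 * task_struct not going away under us.
		 */
		if (cpu_curr(cpu)->mm == mm)
			smp_call_function_single(cpu, membarrier_ipi,
						 NULL, 1);
	}
	rcu_read_unlock();
}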
--