Message-ID: <20100110011255.GE25790@Krystal>
Date: Sat, 9 Jan 2010 20:12:55 -0500
From: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
akpm@...ux-foundation.org, josh@...htriplett.org,
tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
barrier
* Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> On Sat, Jan 09, 2010 at 06:16:40PM -0500, Steven Rostedt wrote:
> > On Sat, 2010-01-09 at 18:05 -0500, Steven Rostedt wrote:
> >
> > > Then we should have O(tasks) for spinlocks taken, and
> > > O(min(tasks, CPUS)) for IPIs.
> >
> > And for nr tasks >> CPUS, this may help too:
> >
> > > cpumask = 0;
> > > foreach task {
> >
> > 	/* Stop early once every online CPU is already covered. */
> > 	if (cpumask == online_cpus)
> > 		break;
> >
> > > 	spin_lock(&task_rq(task)->lock);
> > > 	if (task_rq(task)->curr == task)
> > > 		cpu_set(task_cpu(task), cpumask);
> > > 	spin_unlock(&task_rq(task)->lock);
> > > }
> > > send_ipi(cpumask);
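For concreteness, a minimal (untested) sketch of that loop on top of
existing primitives could look as follows. membarrier_ipi() and
membarrier_send_ipis() are made-up names, and this version checks
task_curr() without taking the runqueue lock, so it leaves aside the
locking/ordering question we were discussing:

#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/rcupdate.h>
#include <linux/gfp.h>

static void membarrier_ipi(void *unused)
{
	/* Execute a full memory barrier on the target CPU. */
	smp_mb();
}

static void membarrier_send_ipis(void)
{
	struct task_struct *g = current, *t = current;
	cpumask_var_t mask;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return;	/* fallback path omitted in this sketch */

	rcu_read_lock();
	do {
		/* Stop scanning once every online CPU is covered. */
		if (cpumask_equal(mask, cpu_online_mask))
			break;
		if (task_curr(t))
			cpumask_set_cpu(task_cpu(t), mask);
	} while_each_thread(g, t);
	rcu_read_unlock();

	preempt_disable();
	/* smp_call_function_many() skips the calling CPU... */
	smp_mb();
	/* ...and IPIs every CPU seen running one of our threads. */
	smp_call_function_many(mask, membarrier_ipi, NULL, 1);
	preempt_enable();

	free_cpumask_var(mask);
}

As Steven said, the IPI count stays bounded by min(tasks, CPUs), and
erring on the side of too many IPIs only costs performance, not
correctness.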
>
> Good point: erring on the side of sending too many IPIs is safe. One
> might even be able to just send the full set if enough of the CPUs were
> running the current process and none of the remainder were running
> real-time threads. And yes, it would then be necessary to throttle
> calls to sys_membarrier().
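For the throttling, the existing ratelimit helper would be one option;
a hypothetical sketch, with a made-up rate and error policy, reusing
the membarrier_send_ipis() helper sketched above:

#include <linux/ratelimit.h>
#include <linux/syscalls.h>

/* Made-up policy: at most 10 calls per second, system-wide. */
static DEFINE_RATELIMIT_STATE(membarrier_rs, HZ, 10);

SYSCALL_DEFINE0(membarrier)
{
	if (!__ratelimit(&membarrier_rs))
		return -EAGAIN;	/* arbitrary error choice */
	membarrier_send_ipis();
	return 0;
}

A system-wide ratelimit would let one process starve the others, so
per-process state would likely be wanted; this only illustrates the
mechanism.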
>
> Quickly hiding behind a suitable boulder... ;-)
:)
One quick counter-argument against IPI-to-all: it would wake up all
CPUs, including those that are asleep in low-power states. Not really
good for energy efficiency.
Mathieu
>
> Thanx, Paul
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68