Message-ID: <20100110000318.GD9044@linux.vnet.ibm.com>
Date: Sat, 9 Jan 2010 16:03:18 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
akpm@...ux-foundation.org, josh@...htriplett.org,
tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
barrier
On Sat, Jan 09, 2010 at 06:16:40PM -0500, Steven Rostedt wrote:
> On Sat, 2010-01-09 at 18:05 -0500, Steven Rostedt wrote:
>
> > Then we should have O(tasks) for spinlocks taken, and
> > O(min(tasks, CPUS)) for IPIs.
>
> And for nr tasks >> CPUS, this may help too:
>
> > cpumask = 0;
> > foreach task {
>
>         if (cpumask == online_cpus)
>                 break;
>
> >         spin_lock(&task_rq(task)->lock);
> >         if (task_rq(task)->curr == task)
> >                 cpu_set(task_cpu(task), cpumask);
> >         spin_unlock(&task_rq(task)->lock);
> > }
> > send_ipi(cpumask);
Good point: erring on the side of sending too many IPIs is safe.  One
might even be able to just send the full set if enough of the CPUs were
running the current process and none of the remainder were running
real-time threads. And yes, it would then be necessary to throttle
calls to sys_membarrier().
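
For concreteness, the loop being discussed might look roughly like the
following as real kernel code.  This is only a sketch under assumptions:
membarrier_ipi_sketch() and ipi_membarrier_func() are made-up names,
task_rq(), struct rq and rq->lock are scheduler internals visible only
inside the scheduler, the thread-list walk and the exact locking and IPI
primitives vary by kernel version, and tasks migrating after their
runqueue lock is dropped are ignored -- which is exactly why over-sending
IPIs has to be harmless.

/*
 * Sketch only, not the posted patch: walk the threads of the current
 * process, note which CPUs are currently running one of them, and IPI
 * just those CPUs, bailing out of the scan early once every online
 * CPU is already covered.
 */
static void ipi_membarrier_func(void *unused)   /* hypothetical handler */
{
        smp_mb();       /* execute the memory barrier on the target CPU */
}

static void membarrier_ipi_sketch(void)         /* hypothetical helper */
{
        struct task_struct *g = current, *t = current;
        cpumask_var_t mask;

        if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
                return;         /* could fall back to IPIing all CPUs */

        rcu_read_lock();
        do {
                struct rq *rq;

                /* Early exit: every online CPU already needs an IPI. */
                if (cpumask_equal(mask, cpu_online_mask))
                        break;

                rq = task_rq(t);
                spin_lock(&rq->lock);   /* raw_spin_lock() on newer kernels */
                if (rq->curr == t)
                        cpumask_set_cpu(task_cpu(t), mask);
                spin_unlock(&rq->lock);
        } while ((t = next_thread(t)) != g);
        rcu_read_unlock();

        /* Run the barrier on each CPU in the mask, waiting for completion. */
        preempt_disable();
        smp_call_function_many(mask, ipi_membarrier_func, NULL, 1);
        preempt_enable();

        free_cpumask_var(mask);
}

Note that smp_call_function_many() skips the calling CPU, which is fine
here because the caller executes its own barrier, and that the mask may
already be stale by the time the IPIs arrive, so sending to a CPU that no
longer runs a thread of the process must remain safe.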
Quickly hiding behind a suitable boulder... ;-)
Thanx, Paul