Date:	Sat, 09 Jan 2010 18:05:27 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, josh@...htriplett.org,
	tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
 barrier

On Sat, 2010-01-09 at 14:20 -0500, Mathieu Desnoyers wrote:

> > > Using the spinlocks adds about 3s for 10,000,000 sys_membarrier() calls
> > > on an 8-core system, for an added 300 ns/core per call.
> > > 
> > > So the overhead of taking the task lock is about twice as high, per
> > > core, as the overhead of the IPIs. This is understandable if the
> > > architecture does an IPI broadcast: the scalability problem then boils
> > > down to exchanging cache lines to inform the IPI sender that the other
> > > CPUs have completed. An atomic operation exchanging a cache line would
> > > be expected to fall within the irqoff+spinlock+spinunlock+irqon
> > > overhead.
> > 
> > Let me rephrase the question...  Isn't the vast bulk of the overhead
> > something other than the runqueue spinlocks?
> 
> I don't think so. What we have here is:
> 
> O(1)
> - a system call
> - cpumask allocation
> - IPI broadcast

> O(nr cpus)
> - wait for IPI handlers to complete
> - runqueue spinlocks

Isn't this really O(tasks)?

Don't you take spin_lock(&task_rq(task)->lock) for each task?

So the cost scales not with the size of the box, but with the number of
tasks that must be checked. Still, if you have 1000 threads, an RCU
writer is bound to incur some overhead. The advantage is that the
readers stay fast.

RCU is known to be slow on the write side; users must be aware of this.
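
To make the read/write asymmetry concrete, here is a minimal userspace
sketch (the names are mine, not from the patch) of what sys_membarrier()
buys a urcu-style reader: read-side entry/exit become plain stores with
only compiler barriers, while the writer pays for the heavy barrier once
per grace period:

#include <stdatomic.h>

/* Stand-in for the proposed syscall; no wrapper exists yet. */
extern void membarrier(void);

/* A real implementation keeps one flag per reader thread; a single
 * shared flag keeps this sketch short. */
static _Atomic int reader_active;

static inline void reader_lock(void)
{
	atomic_store_explicit(&reader_active, 1, memory_order_relaxed);
	atomic_signal_fence(memory_order_seq_cst); /* compiler barrier only */
}

static inline void reader_unlock(void)
{
	atomic_signal_fence(memory_order_seq_cst); /* compiler barrier only */
	atomic_store_explicit(&reader_active, 0, memory_order_relaxed);
}

static void writer_wait_for_readers(void)
{
	membarrier();	/* process-wide barrier replaces per-read smp_mb() */
	while (atomic_load_explicit(&reader_active, memory_order_relaxed))
		;	/* a real writer would sleep, not spin */
	membarrier();
}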

Then we should have O(tasks) spinlock acquisitions, and
O(min(tasks, CPUS)) IPIs, roughly:

cpumask_t cpumask = CPU_MASK_NONE;

/* For each thread of the process: peek at its runqueue under the rq
 * lock and record which CPUs are currently running one of them. */
foreach task {
	spin_lock(&task_rq(task)->lock);
	if (task_rq(task)->curr == task)
		cpu_set(task_cpu(task), cpumask);
	spin_unlock(&task_rq(task)->lock);
}
send_ipi(cpumask);
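
For the "wait for IPI handlers to complete" step, the handshake is
essentially one shared counter: every handler decrements it, and the
sender spins on that cache line until all targeted CPUs have checked in.
A minimal sketch (illustrative names, not the smp_call_function()
internals; send_ipi() is the pseudocode helper from above):

static atomic_t pending;

static void membarrier_ipi_handler(void *unused)
{
	smp_mb();		/* the barrier the caller asked for */
	atomic_dec(&pending);	/* bounces the cache line back to the sender */
}

static void membarrier_send_and_wait(cpumask_t cpumask)
{
	atomic_set(&pending, cpus_weight(cpumask));
	send_ipi(cpumask);
	while (atomic_read(&pending))
		cpu_relax();	/* one cache-line exchange per completing CPU */
}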

-- Steve



