Date:	Sat, 9 Jan 2010 20:01:04 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, josh@...htriplett.org,
	tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
	barrier

* Steven Rostedt (rostedt@...dmis.org) wrote:
> On Sat, 2010-01-09 at 14:20 -0500, Mathieu Desnoyers wrote:
> 
> > > > Using the spinlocks adds about 3s for 10,000,000 sys_membarrier() calls
> > > > on an 8-core system, for an added 300 ns/core per call.
> > > > 
> > > > So the overhead of taking the task lock is about twice as high, per
> > > > core, as the overhead of the IPIs. This is understandable if the
> > > > architecture does an IPI broadcast: the scalability problem then boils
> > > > down to exchanging cache lines to inform the IPI sender that the other
> > > > CPUs have completed. An atomic operation exchanging a cache line would
> > > > be expected to fit within the irqoff+spinlock+spinunlock+irqon overhead.
> > > 
> > > Let me rephrase the question...  Isn't the vast bulk of the overhead
> > > something other than the runqueue spinlocks?
> > 
> > I don't think so. What we have here is:
> > 
> > O(1)
> > - a system call
> > - cpumask allocation
> > - IPI broadcast
> 
> > O(nr cpus)
> 
> Isn't this really O(tasks)?

Yes, you are right. The iteration is done with:

for_each_cpu(cpu, mm_cpumask(current->mm))

which is bounded by the number of threads in the process.

> 
> Don't you do the spinlock(task_rq(task)->rq->lock)?

Within this loop, I check whether cpu_curr(cpu)->mm matches current->mm.

So, really, it's O(min(nr threads, nr cpus)), which could be translated
into O(nr active threads).
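
For reference, here is a minimal sketch of what that iteration looks
like (illustrative only, not the RFC patch itself: membarrier_ipi() is a
made-up handler name, cpu_curr() is only visible from within the
scheduler, and the runqueue locking discussed below is left out):

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/sched.h>
#include <linux/smp.h>

static void membarrier_ipi(void *unused)
{
	smp_mb();	/* full memory barrier on each target CPU */
}

static void membarrier_sketch(void)
{
	cpumask_var_t tmpmask;
	int cpu;

	if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL))
		return;		/* error handling omitted in this sketch */
	cpumask_clear(tmpmask);

	/* Bounded by the number of threads in the process. */
	for_each_cpu(cpu, mm_cpumask(current->mm)) {
		/*
		 * Only CPUs currently running one of our threads get an
		 * IPI.  Note: reading cpu_curr(cpu)->mm without the
		 * runqueue lock is the racy variant; the spinlock
		 * discussion below is about making this check safe.
		 */
		if (cpu_curr(cpu)->mm == current->mm)
			cpumask_set_cpu(cpu, tmpmask);
	}

	/* Send the IPIs and wait for all handlers to complete. */
	preempt_disable();
	smp_call_function_many(tmpmask, membarrier_ipi, NULL, 1);
	preempt_enable();

	free_cpumask_var(tmpmask);
}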

> 
> So the scaling issue is not with large boxes, but with the number of
> tasks that must be checked. Still, if you have 1000 threads, an RCU
> writer is bound to take a bit of overhead. But the advantage is that the
> readers are still fast.

Yep.

> 
> RCU is known to be slow for writing. A user must be aware of this.

True, although the goal of this modification is to ensure that
synchronize_rcu() is not painfully slow and does not involve waking up
all threads, which would have many side effects on the system (killing
sleep states and so on).

> 
> Then we should have O(tasks) for spinlocks taken, and 
> O(min(tasks, CPUS)) for IPIs.

We actually have O(nr active threads) for both the spinlocks taken and
the IPI wait, which is not that bad: since each CPU runs at most one of
the process's threads at a time, a process with 1000 threads on an
8-core box takes at most 8 runqueue locks and sends at most 8 IPIs per
call.

You're starting to convince me: let's start with something rock-solid,
and wait until there is a need for something faster before doing tighter
coupling with the scheduler memory barriers.

Thanks,

Mathieu

> 
> cpumask = 0;
> foreach task {
> 	spin_lock(&task_rq(task)->lock);
> 	if (task_rq(task)->curr == task)
> 		cpu_set(task_cpu(task), cpumask);
> 	spin_unlock(&task_rq(task)->lock);
> }
> send_ipi(cpumask);
> 
> -- Steve
> 
> 
> > - wait for IPI handlers to complete
> > - runqueue spinlocks
> 
> 

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68