Date:	Thu, 21 Jan 2010 12:01:09 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, josh@...htriplett.org,
	tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
	barrier (v5)

* Peter Zijlstra (peterz@...radead.org) wrote:
> On Thu, 2010-01-21 at 11:07 -0500, Mathieu Desnoyers wrote:
> > 
> > One efficient way to fit the requirement of sys_membarrier() would be to
> > create spin_lock_mb()/spin_unlock_mb(), which would have full memory
> > barriers rather than the acquire/release semantic. These could be used
> > within schedule() execution. On UP, they would turn into preempt off/on
> > and a compiler barrier, just like normal spin locks.
> > 
> > On architectures like x86, the atomic instructions already imply a full
> > memory barrier, so we have a direct mapping and no overhead. On
> > architecture where the spin lock only provides acquire semantic (e.g.
> > powerpc using lwsync and isync), then we would have to create an
> > alternate implementation with "sync". 
> 
> There's also clear_tsk_need_resched() which is an atomic op.

But clear_bit() only acts as a full memory barrier on x86 due to the
lock-prefix side-effect.
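
For illustration only (not from the patch; the helper name below is made up
for the example): when ordering is actually needed around a bit operation,
the generic code has to spell the barrier out explicitly, precisely because
clear_bit() makes no ordering guarantee by itself. On x86 the
smp_mb__after_clear_bit() helper is just a compiler barrier, since the
lock-prefixed RMW is already a full mb; on other architectures it expands to
smp_mb():

	/* Sketch, kernel context assumed (<linux/bitops.h>). */
	static inline void clear_flag_with_full_mb(unsigned long *word)
	{
		clear_bit(0, word);		/* atomic RMW, no ordering implied */
		smp_mb__after_clear_bit();	/* smp_mb() where clear_bit() is not a barrier */
	}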

Ideally, if we add some kind of synchronization, it would be good to
piggy-back on spin lock/unlock, because these already mark synchronization
points (acquire/release semantics) and they surround the scheduler
execution. Since we need memory barriers both before and after the data
modification, this looks like a sane way to proceed: if the data update is
protected by the spinlock, then we are sure the matching full memory
barriers are in place.
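
To make that concrete, here is a rough sketch of the intended semantics of
the spin_lock_mb()/spin_unlock_mb() primitives mentioned above (a sketch
only, not a patch; as noted for x86, an architecture whose lock and unlock
sequences already imply full barriers could turn the extra smp_mb() calls
into no-ops):

	static inline void spin_lock_mb(spinlock_t *lock)
	{
		spin_lock(lock);	/* acquire semantics */
		smp_mb();		/* upgrade to a full memory barrier */
	}

	static inline void spin_unlock_mb(spinlock_t *lock)
	{
		smp_mb();		/* full memory barrier before releasing */
		spin_unlock(lock);	/* release semantics */
	}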

> 
> The thing I'm worrying about is not making schedule() more expensive for
> a relatively rare operation like sys_membarrier(), while at the same
> time trying to not make while (1) sys_membarrier() ruin your system.

Yep, I share your concern.

> 
> On x86 there is plenty that implies a full mb before rq->curr = next,
> the thing to figure out is what is generally the cheapest place to force
> one for other architectures.

Yep.

> 
> Not sure where that leaves us, since I'm not too familiar with !x86.
> 

As I proposed above, I think what we have to look for is: where are weak
memory barriers already required? We can then upgrade those to full memory
barriers. The spinlock approach is one possible solution.

The problem with piggy-backing on clear_flag/set_flag is that these
operations don't semantically imply any memory barrier at all, so adding a
full mb() around them would be much more costly than "upgrading" an
already-existing barrier.

Thanks,

Mathieu


-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68