Message-ID: <20100121160729.GB12842@Krystal>
Date:	Thu, 21 Jan 2010 11:07:29 -0500
From:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>, linux-kernel@...r.kernel.org,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Oleg Nesterov <oleg@...hat.com>, Ingo Molnar <mingo@...e.hu>,
	akpm@...ux-foundation.org, josh@...htriplett.org,
	tglx@...utronix.de, Valdis.Kletnieks@...edu, dhowells@...hat.com,
	laijs@...fujitsu.com, dipankar@...ibm.com
Subject: Re: [RFC PATCH] introduce sys_membarrier(): process-wide memory
	barrier (v5)

* Peter Zijlstra (peterz@...radead.org) wrote:
> On Tue, 2010-01-19 at 20:06 +0100, Peter Zijlstra wrote:
> > 
> > We could possibly look at placing that assignment in context_switch()
> > between switch_mm() and switch_to(), which should provide a mb before
> > and after I think, Ingo?
> 
> Right, I just found out why we cannot do that: the first thing
> context_switch() does is prepare_task_switch(), which includes
> prepare_lock_switch(), which on __ARCH_WANT_UNLOCKED_CTXSW machines
> drops the rq->lock, and we have to have rq->curr assigned by then.
> 

OK.

One efficient way to meet the requirements of sys_membarrier() would be
to create spin_lock_mb()/spin_unlock_mb(), which would provide full
memory barriers rather than acquire/release semantics. These could be
used within schedule() execution. On UP, they would turn into preempt
disable/enable and a compiler barrier, just like normal spin locks.
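
For illustration, a minimal sketch of that UP mapping (the macro names
are hypothetical; the reduction to preempt_disable()/preempt_enable()
plus barrier() mirrors what UP spinlocks already compile down to):

#ifndef CONFIG_SMP
/*
 * Sketch only: on UP there is no other CPU to order against, so the
 * "lock" reduces to disabling preemption, and barrier() keeps the
 * compiler from moving memory accesses across the critical section.
 */
#define spin_lock_mb(lock)	do { preempt_disable(); barrier(); } while (0)
#define spin_unlock_mb(lock)	do { barrier(); preempt_enable(); } while (0)
#endif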

On architectures like x86, the atomic instructions already imply a full
memory barrier, so we get a direct mapping and no overhead. On
architectures where the spin lock only provides acquire semantics (e.g.
powerpc, using lwsync and isync), we would have to create an alternate
implementation using "sync".
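
A rough sketch of what such a powerpc variant could look like (the _mb
name is hypothetical, and promoting the existing arch_spin_lock() with
an explicit "sync" is an assumption about how it might be built, not
existing code):

static inline void arch_spin_lock_mb(arch_spinlock_t *lock)
{
	arch_spin_lock(lock);	/* acquire only (lwsync/isync) */
	/*
	 * Promote to a full barrier: "sync" orders all prior and
	 * subsequent loads and stores on powerpc.
	 */
	__asm__ __volatile__("sync" : : : "memory");
}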

In the meantime, we could even provide a generic fallback along the
following lines:

static inline void spin_lock_mb(spinlock_t *lock)
{
	spin_lock(lock);
	smp_mb();	/* upgrade the acquire to a full barrier */
}

static inline void spin_unlock_mb(spinlock_t *lock)
{
	smp_mb();	/* upgrade the release to a full barrier */
	spin_unlock(lock);
}
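
To show the intended pairing, a hypothetical caller in the scheduler
(a sketch of the intent only; where exactly this would sit within
schedule() is what this thread is discussing):

	/*
	 * With full-barrier lock primitives, the rq->curr update is
	 * ordered against the memory accesses on either side, which
	 * is what sys_membarrier() needs when it inspects rq->curr
	 * on remote CPUs.
	 */
	spin_lock_mb(&rq->lock);
	rq->curr = next;
	spin_unlock_mb(&rq->lock);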

How does that sound?

Mathieu


-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68