Date:	Mon, 01 Feb 2010 10:41:39 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
	linux-kernel@...r.kernel.org,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Nicholas Miell <nmiell@...cast.net>, laijs@...fujitsu.com,
	dipankar@...ibm.com, josh@...htriplett.org, dvhltc@...ibm.com,
	niv@...ibm.com, tglx@...utronix.de, peterz@...radead.org,
	Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [patch 1/3] Create spin lock/spin unlock with distinct memory
 barrier

On Mon, 2010-02-01 at 07:22 -0800, Linus Torvalds wrote:
> 

> If you need other smp barriers at the lock, then what about the non-locked 
> accesses _while_ the lock is held? You get no ordering guarantees there. 
> The whole thing sounds highly dubious. 

The issue is not about protecting data; it was all about ordering an
update of a variable (mm_cpumask) with respect to scheduling. The lock
was just a convenient place to add this ordering. The memory barriers
here would allow the syscall to use memory barriers instead of locks.

> 
> And all of this for something that is a new system call that nobody 
> actually uses? To optimize the new and experimental path with some insane 
> lockfree model, while making the core kernel more complex?  A _very_ 
> strong NAK from me.

I totally agree with this. The updates here were driven by the fear
that grabbing all rq spinlocks (one at a time) from a syscall would
open up a DoS (or, as Nick said, a RoS - Reduction of Service). If
someone called this syscall in a while(1) loop on a box with a large
number of CPUs, it could cause cache thrashing.

But this is all paranoia, and not worth the complexity in the core
scheduler. We don't even know whether this fear is founded.

-- Steve

