Date:	Tue, 13 May 2008 19:13:52 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Matthew Wilcox <matthew@....cx>
Cc:	Sven Wegener <sven.wegener@...aler.net>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Alexander Viro <viro@....linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [git pull] scheduler fixes


* Matthew Wilcox <matthew@....cx> wrote:

> > yes, but even for parallel wakeups for completions it's good in 
> > general to keep more tasks in flight than to keep less tasks in 
> > flight.
> 
> That might be the case for some users, but it isn't the case for XFS. 
> The first thing that each task does is grab a spinlock, so if you put 
> as much in flight as early as possible, you end up with horrible 
> contention on that spinlock. [...]

hm, this sounds like damage that the XFS code inflicts on itself. 

Why does it signal to its waiters that "resource is available", when in 
reality that resource is not available but is immediately serialized 
behind a lock? (even if the lock might technically be some _other_ 
object)

I have not looked closely at this, but the more natural wakeup flow 
here would be: if you know there's going to be immediate contention, 
signal a _single_ resource to a _single_ waiter, and then, once that 
contention point is past and a (hopefully) much more parallel 
processing phase begins, use a multi-value completion there.

in other words: don't tell the scheduler that there is parallelism in 
the system when in reality there is not. And for the same reason, do 
not artificially throttle wakeups in the completion mechanism just 
because one given user utilizes it suboptimally. Once throttled, that 
lost parallelism cannot be regained.

> [...] I have no idea whether this is the common case for multi-valued 
> semaphores or not, it's just the only one I have data for.

yeah. I'd guess XFS would be the primary user in this area who cares 
about performance.

> It seems like most users use completions where it'd be just as easy to 
> use a task pointer and call wake_up_task(). [...]

yeah - although i guess in general it's a bit safer to use an explicit 
completion. With a task pointer you have to be sure the task is still 
present, etc. (With a completion you are forced to put the completion 
object _somewhere_, which immediately forces one to think about 
lifetime issues. A wakeup to a bare task pointer is way too easy to 
get wrong.)

So in general i'd recommend the use of completions.

> [...] In any case, I think there's no evidence one way or the other 
> about how people are using multi-sleeper completions.

yeah, that's definitely so.

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/