Date:	Tue, 13 May 2008 09:28:46 -0600
From:	Matthew Wilcox <matthew@....cx>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Sven Wegener <sven.wegener@...aler.net>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Alexander Viro <viro@....linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [git pull] scheduler fixes

On Tue, May 13, 2008 at 04:42:07PM +0200, Ingo Molnar wrote:
> yes, but even for parallel wakeups for completions it's good in general 
> to keep more tasks in flight than to keep less tasks in flight.

That might be the case for some users, but it isn't the case for XFS.
The first thing each task does is grab a spinlock, so if you put as
many tasks in flight as early as possible, you end up with horrible
contention on that spinlock.  I have no idea whether this is the
common case for multi-valued semaphores or not; it's just the only
one I have data for.

> > So the only thing worth talking about (and indeed, it's now entirely 
> > moot) is what's the best way to solve this problem /for this kind of 
> > semaphore/.
> 
> it's not really moot in terms of improving the completions code i 
> suspect? For XFS i guess performance matters.

I think the completion code is less optimised than the semaphore code
today.  Clearly the same question does arise, but I don't know what the
answer is for completion users either.

Let's do a quick survey.  drivers/net has 5 users:

3c527.c -- execution_cmd is called with a mutex held, so there's never
more than one task waiting anyway.  xceiver_cmd is called during open
and close, which I think are serialised at a higher level.  In any
case, no performance issue here.

iseries_veth.c -- grabs a spinlock soon after being woken.

plip.c -- called in close, no perf implication.

ppp_synctty.c -- called in close, no perf implication.

ps3_gelic_wireless.c -- If this isn't serialised, it's buggy.


Maybe drivers/net is a bad example.  Let's look at */*.c:

as-iosched.c -- in exit path.
blk-barrier.c -- completion on stack, so only one waiter (see the
sketch after this list)
blk-exec.c -- ditto
cfq-iosched.c -- in exit path

crypto/api.c -- in init path
crypto/gcm.c -- in setkey path
crypto/tcrypt.c -- crypto testing.  Not a perf path.

fs/exec.c -- waiting for coredumps.
kernel/exit.c -- likewise
kernel/fork.c -- completion on stack
kernel/kmod.c -- completion on stack 
kernel/kthread.c -- kthread creation and deletion.  Shouldn't be a hot
path, plus this looks like there's only going to be one task waiting.
kernel/rcupdate.c -- one completion on stack, one synchronised by a mutex
kernel/rcutorture.c -- doesn't matter
kernel/sched.c -- both completions on stack
kernel/stop_machine.c -- completion on stack
kernel/sysctl.c -- completion on stack
kernel/workqueue.c -- completion on stack

lib/klist.c -- This one looks like it could have lots of waiters, if
only anything actually used klists.
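
For reference, the "completion on stack" pattern that keeps coming up
above looks roughly like this (a sketch; queue_async_work() is a
made-up stand-in for whatever kicks off the work):

#include <linux/completion.h>

static void submit_and_wait(void)
{
        DECLARE_COMPLETION_ONSTACK(done);

        /* hand &done to the async side, e.g. inside a request */
        queue_async_work(&done);        /* made-up name */

        /* the completion lives on our stack, so only this one task
           can possibly be waiting on it */
        wait_for_completion(&done);
}

/* the async side, when it finishes */
static void async_work_finished(struct completion *done)
{
        complete(done);                 /* exactly one waiter to wake */
}

This is exactly the single-waiter case, so none of the multi-sleeper
wakeup questions arise for these users.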

It seems like most users use a completion where it'd be just as easy
to keep a task pointer and call wake_up_process().  In any case, I
think there's no evidence one way or the other about how people are
using multi-sleeper completions.
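
A sketch of that task-pointer alternative (work_done, sleeper_task and
the function names are made up; assume sleeper_task is published
before the waker can run):

#include <linux/sched.h>

static struct task_struct *sleeper_task;
static int work_done;

static void sleeper(void)
{
        sleeper_task = current;
        set_current_state(TASK_UNINTERRUPTIBLE);
        /* re-check the condition each time round to avoid a lost wakeup */
        while (!work_done) {
                schedule();
                set_current_state(TASK_UNINTERRUPTIBLE);
        }
        __set_current_state(TASK_RUNNING);
}

static void waker(void)
{
        work_done = 1;
        /* wake_up_process() pairs with the sleeper's
           set_current_state(), so the store to work_done is seen */
        wake_up_process(sleeper_task);
}

A completion wraps this same pattern up behind a waitqueue and a done
count, which is what makes multiple sleepers possible at all.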

-- 
Intel are signing my paycheques ... these opinions are still mine
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
