Date:	Tue, 24 Sep 2013 18:16:53 +1000
From:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Paul Mackerras <paulus@....ibm.com>,
	Ingo Molnar <mingo@...nel.org>,
	James Hogan <james.hogan@...tec.com>,
	"James E.J. Bottomley" <jejb@...isc-linux.org>,
	Helge Deller <deller@....de>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	"David S. Miller" <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Anton Blanchard <anton@....ibm.com>
Subject: Re: [RFC GIT PULL] softirq: Consolidation and stack overrun fix

On Tue, 2013-09-24 at 10:04 +0200, Peter Zijlstra wrote:
> On Tue, Sep 24, 2013 at 11:52:07AM +1000, Benjamin Herrenschmidt wrote:
> > So if that holds, we have a solid way to do per-cpu. On one side, I tend
> > to think that r13 being task/thread/thread_info is probably a better
> > overall choice, I'm worried that going in a different direction than x86
> > means generic code will get "tuned" to use per-cpu for performance
> > critical stuff rather than task/thread/thread_info in inflexible ways.
> 
> The plus side of per-cpu over per-task is that one typically has a lot
> less cpus than tasks. Also, its far easier/cheaper to iterate cpus than
> it is to iterate tasks.

I don't see how that relates to the above, though... ie, how do the
number of CPUs vs. tasks and the relative ease of iteration relate to
how fast we access the "current" cpu and the "current" task?

The thing is, if I use r13 as "current" (and put thread_info in the task
struct), virtually *all* my accesses to task, thread and thread_info
structs become a single load or store instruction from r13.

On the other hand, access to "my_cpu_offset" for per-cpu remains (since
that's what we do today already) a load to get the offset, then an
addition + access. (ie, I would stash the cpu offset into the
thread_info and context switch it).

If I use r13 as "my_cpu_offset", then I might be able to skip a load for
per-cpu accesses to the current cpu, making them a bit faster, but add
an indirection for "current" and thread_info.

It's basically a question of where I put the extra load and which
choice has the bigger net benefit. I tend to think we access "current"
a LOT more often than per-cpu data, but I might be wrong.

Additionally, using r13 for "current" removes all the problems with gcc
copying the value across preempt_disable/enable etc... while using it
as per-cpu means they remain. We think we have a fix (which will involve
a special preempt_barrier() added to preempt_disable/enable and irq
enable/disable), but it's still not as nice as "the problem just doesn't
exist" :-)

Cheers,
Ben.

