Message-ID: <m1abgqu5rg.fsf@frodo.ebiederm.org>
Date: Wed, 09 Jul 2008 14:06:27 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Jeremy Fitzhardinge <jeremy@...p.org>
Cc: "H. Peter Anvin" <hpa@...or.com>,
Christoph Lameter <cl@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, Mike Travis <travis@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jack Steiner <steiner@....com>, linux-kernel@...r.kernel.org,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [RFC 00/15] x86_64: Optimize percpu accesses
Jeremy Fitzhardinge <jeremy@...p.org> writes:
> H. Peter Anvin wrote:
>> Thinking about this some more, I don't know if it would make sense to put the
>> x86-64 stack canary at the *end* of the percpu area, and otherwise use
>> negative offsets. That would make sure they were readily reachable from
>> %rip-based references from within the kernel text area.
>
> If we can move the canary then a whole pile of options open up. But the problem
> is that we can't.
But we can pick an arbitrary point for %gs to point at.

Hmm. This whole thing is even sillier than I thought.
Why can't we access per cpu vars as:
%gs:(per_cpu__var - __per_cpu_start) ?
If we can subtract constants like that and let the linker perform the
resolution at link time, a zero-based per-cpu segment becomes a moot
issue.
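
To make that concrete, here is a minimal sketch of what such an
accessor could look like (percpu_read_sketch and the cpu_number usage
are hypothetical; per_cpu__ is the prefix the per-cpu machinery
already uses, and whether the assembler and linker will actually
resolve the symbol difference is exactly the open question):

	/* Sketch: read a per-cpu unsigned long through %gs, using the
	 * link-time constant per_cpu__var - __per_cpu_start as the
	 * displacement.  Requires both symbols to end up in the same
	 * section so the difference is computable at link time. */
	#define percpu_read_sketch(var)					\
	({								\
		unsigned long __val;					\
		asm("movq %%gs:per_cpu__" #var "-__per_cpu_start, %0"	\
		    : "=r" (__val));					\
		__val;							\
	})

	/* usage: unsigned long n = percpu_read_sketch(cpu_number); */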
We may need to change the definition of PERCPU in vmlinux.lds.h to:

 #define PERCPU(align)						\
 	. = ALIGN(align);					\
-	__per_cpu_start = .;					\
 	.data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {	\
+		__per_cpu_start = .;				\
 		*(.data.percpu)					\
 		*(.data.percpu.shared_aligned)			\
+		__per_cpu_end = .;				\
+	}
-	}							\
-	__per_cpu_end = .;
That way the linker knows __per_cpu_start and __per_cpu_end are in the
same section as the per-cpu variables. Otherwise it sounds entirely
reasonable, just slightly trickier math at link time.
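
For reference, with that change PERCPU(align) would expand to roughly
the following in the final linker script (a sketch of the result, not
output copied from a build):

	. = ALIGN(align);
	.data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
		__per_cpu_start = .;
		*(.data.percpu)
		*(.data.percpu.shared_aligned)
		__per_cpu_end = .;
	}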
Eric