Message-ID: <45172AC8.2070701@goop.org>
Date: Sun, 24 Sep 2006 18:03:04 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Rusty Russell <rusty@...tcorp.com.au>
CC: Andi Kleen <ak@....de>,
lkml - Kernel Mailing List <linux-kernel@...r.kernel.org>,
virtualization <virtualization@...ts.osdl.org>
Subject: Re: [PATCH 5/7] Use %gs for per-cpu sections in kernel
Rusty Russell wrote:
>> So are symbols referencing the .data.percpu section 0-based? Wouldn't
>> you need to subtract __per_cpu_start from the symbols to get a 0-based
>> segment offset?
>>
>
> I don't think I understand the question.
>
> The .data.percpu section is the "template" per-cpu section, freed along
> with other initdata: after setup_per_cpu_areas() is called, it is not
> supposed to be used. Around that time, the gs segment is set up based
> at __per_cpu_offset[cpu], so "%gs:<varname>" accesses the local version.
>
If you do
DEFINE_PER_CPU(int, foo);
then this ends up defining per_cpu__foo in .data.percpu. Since
.data.percpu is part of the init data section, it starts at some address
X (not 0), so the real offset into the per-cpu memory is
(per_cpu__foo - __per_cpu_start). setup_per_cpu_areas() builds this
delta into __per_cpu_offset[], which means the base of your %gs segment
is at -__per_cpu_start from the actual start of the CPU's per-cpu
memory, and the limit has to be correspondingly larger. That's a bit
ugly, especially since __per_cpu_start is a very large address, so this
scheme pretty much relies on being able to wrap around the segment
limit, and will be very bad for Xen.
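
To make that concrete, here is a rough sketch of the pieces involved,
simplified from the 2.6-era generic code (alignment and error handling
elided):

/* DEFINE_PER_CPU() drops the variable into the .data.percpu template: */
#define DEFINE_PER_CPU(type, name) \
	__attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name

extern char __per_cpu_start[], __per_cpu_end[];
unsigned long __per_cpu_offset[NR_CPUS];

static void __init setup_per_cpu_areas(void)
{
	unsigned long size = __per_cpu_end - __per_cpu_start;
	char *ptr = alloc_bootmem(size * NR_CPUS);
	int i;

	for (i = 0; i < NR_CPUS; i++, ptr += size) {
		/* the -__per_cpu_start delta gets baked in right here */
		__per_cpu_offset[i] = ptr - __per_cpu_start;
		memcpy(ptr, __per_cpu_start, size);
	}
}
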
An alternative is to put the "-__per_cpu_start" into the addressing mode
when constructing the address of the per-cpu variable.
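
Something like this, say (read_per_cpu_32 is a made-up name for
illustration, 32-bit loads only), subtracting __per_cpu_start when
forming the address instead of hiding it in the segment base:

extern char __per_cpu_start[];

/* hypothetical accessor: fold -__per_cpu_start into the address, so
   the %gs base can point exactly at this CPU's copy */
#define read_per_cpu_32(var)						\
({									\
	typeof(per_cpu__##var) ret__;					\
	asm("movl %%gs:(%1), %0"					\
	    : "=r" (ret__)						\
	    : "r" ((unsigned long)&per_cpu__##var -			\
		   (unsigned long)__per_cpu_start));			\
	ret__;								\
})

Then the %gs base for each CPU can be the actual start of its per-cpu
area, with a limit that just covers it.
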
J