Message-ID: <20090116131601.GA20593@elte.hu>
Date: Fri, 16 Jan 2009 14:16:01 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Tejun Heo <htejun@...il.com>
Cc: roel kluin <roel.kluin@...il.com>,
"H. Peter Anvin" <hpa@...or.com>, Brian Gerst <brgerst@...il.com>,
ebiederm@...ssion.com, cl@...ux-foundation.org,
rusty@...tcorp.com.au, travis@....com,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
steiner@....com, hugh@...itas.com
Subject: Re: [PATCH x86/percpu] x86_64: initialize this_cpu_off to
__per_cpu_load
* Tejun Heo <htejun@...il.com> wrote:
> On x86_64, if get_per_cpu_var() is used before the per-cpu area is set
> up (which happens if lockdep is turned on), it needs this_cpu_off to
> point to __per_cpu_load. Initialize it accordingly.
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> ---
> Okay, this fixes the problem. Tested with gcc-4.3.2 and binutils-2.19
> (openSUSE 11.1) for all four variants I can test (64-smp, 64-up,
> 32-smp, 32-up). This and the previous merge fix patch are available
> in the following git branch.
cool!
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git tj-percpu
>
> arch/x86/kernel/smpcommon.c | 5 +++++
> 1 files changed, 5 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/smpcommon.c b/arch/x86/kernel/smpcommon.c
> index 84395fa..7e15781 100644
> --- a/arch/x86/kernel/smpcommon.c
> +++ b/arch/x86/kernel/smpcommon.c
> @@ -3,8 +3,13 @@
> */
> #include <linux/module.h>
> #include <asm/smp.h>
> +#include <asm/sections.h>
>
> +#ifdef CONFIG_X86_64
> +DEFINE_PER_CPU(unsigned long, this_cpu_off) = (unsigned long)__per_cpu_load;
> +#else
> DEFINE_PER_CPU(unsigned long, this_cpu_off);
> +#endif
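
To make the early-boot situation concrete, here is a toy userspace model of
the idea. This is not kernel code; names like template_area, cpu0_area and
this_cpu_base are made up for illustration. It only shows why the "current
offset" has to start out pointing at the initial load image (the role
__per_cpu_load plays in the patch) rather than at 0 when the per-cpu symbols
are zero-based:

/* toy userspace model, not kernel code: all names are made up */
#include <stdio.h>
#include <string.h>

#define AREA_WORDS 16

/* stand-in for the initial per-cpu image linked into the kernel
 * (what __per_cpu_load points at) */
static unsigned long template_area[AREA_WORDS];

/* stand-in for the real per-CPU copy allocated during setup */
static unsigned long cpu0_area[AREA_WORDS];

/* stand-in for this_cpu_off: base of "this CPU's" variables right now;
 * initialized to the template so accesses work before setup, which is
 * what the patch does for the x86_64 zero-based layout */
static unsigned long *this_cpu_base = template_area;

/* a "per-cpu" variable identified by its zero-based slot in the area */
#define COUNTER_SLOT 0

int main(void)
{
	/* early access, before "setup_per_cpu_areas()": still resolves to
	 * real storage because this_cpu_base points at the template */
	this_cpu_base[COUNTER_SLOT] = 42;
	printf("early value: %lu\n", this_cpu_base[COUNTER_SLOT]);

	/* later: the per-CPU area is populated from the template and the
	 * base is switched over, so the same access keeps working */
	memcpy(cpu0_area, template_area, sizeof(template_area));
	this_cpu_base = cpu0_area;
	printf("after setup: %lu\n", this_cpu_base[COUNTER_SLOT]);
	return 0;
}
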
I've pulled your tree, but couldn't we do this symmetrically in the 32-bit
case too and avoid the ugly #ifdef somehow?
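
Purely as a sketch of what symmetric handling would look like, assuming the
32-bit side also used zero-based per-cpu symbols (which it does not today, so
this single line would not be correct there without further changes):

DEFINE_PER_CPU(unsigned long, this_cpu_off) = (unsigned long)__per_cpu_load;
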
Ingo