Message-Id: <200712021413.27612.rusty@rustcorp.com.au>
Date: Sun, 2 Dec 2007 14:13:27 +1100
From: Rusty Russell <rusty@...tcorp.com.au>
To: Christoph Lameter <clameter@....com>
Cc: Andi Kleen <ak@...e.de>, Jeremy Fitzhardinge <jeremy@...p.org>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch 1/3] Percpu infrastructure to rebase the per cpu area to 0UL
On Friday 30 November 2007 17:43:06 Christoph Lameter wrote:
> Support an option
>
> CONFIG_PERCPU_ZERO_BASED
Minor nitpicks aside, I like this patch (using AT() was a nice touch: I tried
using section-relative relocs originally and it gave horrible results).
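For anyone wondering what rebasing to 0 buys: a per-cpu symbol's link-time
address is then already its offset into each CPU's copy, and
__per_cpu_offset[cpu] (set up in the hunk quoted below) is simply the pointer
bootmem handed back.  As a sketch only (sketch_per_cpu_ptr is a made-up name,
not anything in the patch):

/* Form a pointer into cpu's copy of a per-cpu variable: the symbol's
 * address is the offset, __per_cpu_offset[cpu] is the base. */
#define sketch_per_cpu_ptr(var, cpu) \
	((typeof(var) *)((char *)&(var) + __per_cpu_offset[(cpu)]))

The generic per_cpu() accessor does essentially this arithmetic, just hidden
behind RELOC_HIDE() so gcc can't get clever with it.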
But the setup_per_cpu_areas() hunk below belongs in a completely separate
patch, since it could well break archs which don't have NUMA stuff set up yet:
> Index: linux-2.6.24-rc3-mm2/init/main.c
> ===================================================================
> --- linux-2.6.24-rc3-mm2.orig/init/main.c   2007-11-29 22:06:01.607326159 -0800
> +++ linux-2.6.24-rc3-mm2/init/main.c   2007-11-29 22:06:22.754826440 -0800
> @@ -370,18 +370,19 @@ EXPORT_SYMBOL(__per_cpu_offset);
>
> static void __init setup_per_cpu_areas(void)
> {
> - unsigned long size, i;
> - char *ptr;
> - unsigned long nr_possible_cpus = num_possible_cpus();
> + unsigned long size;
> + int cpu;
>
> /* Copy section for each CPU (we discard the original) */
> size = ALIGN(PERCPU_ENOUGH_ROOM, PAGE_SIZE);
> - ptr = alloc_bootmem_pages(size * nr_possible_cpus);
>
> - for_each_possible_cpu(i) {
> - __per_cpu_offset[i] = ptr - __per_cpu_start;
> - memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
> - ptr += size;
> + for_each_possible_cpu(cpu) {
> + char *ptr;
> +
> + ptr = alloc_bootmem_pages_node(NODE_DATA(cpu_to_node(cpu)),
> + size);
> + __per_cpu_offset[cpu] = ptr - __per_cpu_start;
> + memcpy(ptr, __per_cpu_load, __per_cpu_size);
> }
> }
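One way to keep the generic path safe in the meantime would be to leave a
plain-bootmem fallback in place.  A sketch only, not a replacement hunk; the
CONFIG_NEED_MULTIPLE_NODES guard is only a guess at the right condition and
the real test may need to be per-arch:

static void __init setup_per_cpu_areas(void)
{
	unsigned long size;
	int cpu;

	/* Copy section for each CPU (we discard the original) */
	size = ALIGN(PERCPU_ENOUGH_ROOM, PAGE_SIZE);

	for_each_possible_cpu(cpu) {
		char *ptr;

#ifdef CONFIG_NEED_MULTIPLE_NODES
		/* Node-local copy, as in the hunk above. */
		ptr = alloc_bootmem_pages_node(NODE_DATA(cpu_to_node(cpu)),
					       size);
#else
		/* UMA: a single bootmem pool, allocate flat as before. */
		ptr = alloc_bootmem_pages(size);
#endif
		__per_cpu_offset[cpu] = ptr - __per_cpu_start;
		memcpy(ptr, __per_cpu_load, __per_cpu_size);
	}
}

That keeps today's behaviour for everyone else while NUMA-capable archs get
node-local copies.
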
Thanks!
Rusty.