Message-ID: <Pine.LNX.4.64.0806121921260.4570@schroedinger.engr.sgi.com>
Date: Thu, 12 Jun 2008 19:27:07 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Rusty Russell <rusty@...tcorp.com.au>
cc: Nick Piggin <nickpiggin@...oo.com.au>,
Martin Peschke <mp3@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
David Miller <davem@...emloft.net>,
Eric Dumazet <dada1@...mosbay.com>,
Peter Zijlstra <peterz@...radead.org>,
Mike Travis <travis@....com>
Subject: Re: [patch 04/41] cpu ops: Core piece for generic atomic per cpu
operations
On Fri, 13 Jun 2008, Rusty Russell wrote:
> cpu_possible_map should definitely be minimal, but your point is well made:
> dynamic percpu could actually cut memory allocation. If we go for a hybrid
> scheme where static percpu is always allocated from the initial chunk,
> however, we still need the current pessimistic overallocation.
The initial chunk would mean that the percpu areas all come from the same
NUMA node. We really need to allocate each processor's area from the node
nearest to that processor (not all processors have node-local memory!).
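
A rough sketch of the kind of node-aware allocation I mean (the function
name is made up; this just mirrors what some arches already do with the
bootmem allocator, with a fallback for cpus whose node has no memory):

#include <linux/bootmem.h>
#include <linux/topology.h>
#include <linux/mmzone.h>
#include <linux/cache.h>
#include <asm/dma.h>		/* MAX_DMA_ADDRESS */

/*
 * Allocate one cpu's percpu area from the node nearest to that cpu,
 * falling back to any node when the cpu's node has no memory of its own.
 */
static void * __init percpu_alloc_near(int cpu, unsigned long size)
{
	int node = cpu_to_node(cpu);

	if (!node_online(node) || !NODE_DATA(node))
		return __alloc_bootmem(size, SMP_CACHE_BYTES,
				       __pa(MAX_DMA_ADDRESS));

	return __alloc_bootmem_node(NODE_DATA(node), size, SMP_CACHE_BYTES,
				    __pa(MAX_DMA_ADDRESS));
}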
It would be good to standardize the way that percpu areas are allocated;
right now each arch rolls its own allocation scheme.
init/main.c:setup_per_cpu_areas() needs to be generalized (a rough sketch
follows the list):
1. Allocate the per cpu areas in a NUMA-aware fashion.
2. Have a function for instantiating a single per cpu area that
can be used during cpu hotplug.
3. Some hooks for arches to override particular behavior as needed.
E.g. IA64 allocates percpu structures in a special way, and x86_64
needs to do some tricks for the pda, and so on.
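
Something along these lines (all names are made up and the details are
hand-waved; the point is the single-cpu helper for 2. and the __weak
override hook for 3., with percpu_alloc_near() being the sketch above):

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/string.h>

extern char __per_cpu_start[], __per_cpu_end[];	/* linker symbols */

/*
 * Default allocator for one cpu's static percpu area. Marked __weak so
 * an arch (IA64, x86_64 with its pda setup, ...) can substitute its own
 * scheme.
 */
void * __weak arch_alloc_percpu_area(int cpu, unsigned long size)
{
	return percpu_alloc_near(cpu, size);
}

/*
 * Instantiate the static percpu area of a single cpu. The same helper
 * could be called from the cpu hotplug path, provided the arch hook
 * uses a non-bootmem allocator at that point.
 */
static void setup_percpu_area(int cpu)
{
	char *ptr = arch_alloc_percpu_area(cpu, PERCPU_ENOUGH_ROOM);

	memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
	__per_cpu_offset[cpu] = ptr - __per_cpu_start;
}

void __init setup_per_cpu_areas(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		setup_percpu_area(cpu);
}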
> Mike's a clever guy, I'm sure he'll think of something :)
Hopefully. Otherwise he will ask me =-).