Message-ID: <20090123151525.GL19986@wotan.suse.de>
Date: Fri, 23 Jan 2009 16:15:25 +0100
From: Nick Piggin <npiggin@...e.de>
To: Andi Kleen <andi@...stfloor.org>
Cc: Pekka Enberg <penberg@...helsinki.fi>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Lin Ming <ming.m.lin@...el.com>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
Subject: Re: [patch] SLQB slab allocator
On Fri, Jan 23, 2009 at 04:06:32PM +0100, Andi Kleen wrote:
> On Fri, Jan 23, 2009 at 03:27:53PM +0100, Nick Piggin wrote:
> >
> > > Although I think I would prefer alloc_percpu, possibly with
> > > per_cpu_ptr(first_cpu(node_to_cpumask(node)), ...)
> >
> > I don't think we have the NUMA information available early enough
> > to do that.
>
> How early? At mem_init time it should be there, because bootmem already
> needed it. By "it" I mean the architecture-level NUMA information.
node_to_cpumask(0) returned 0 at kmem_cache_init time.
> > OK, but if it is _possible_ for the node to gain memory, then you
> > can't do that of course.
>
> In theory it could gain memory through memory hotplug.
Yes.
> > The cache_line_size() change wouldn't change slqb code significantly.
> > I have no problem with it, but I simply won't have time to do it and
> > test all architectures and get them merged and hold off merging
> > SLQB until they all get merged.
>
> I was mainly referring to the sysfs code here.
OK.
> > > Could you perhaps mark all the code you don't want to change?
> >
> > Primarily the debug code from SLUB.
>
> Ok so you could fix the sysfs code? @)
>
> Anyways, if you have such shared pieces perhaps it would be better
> if you just pull them all out into a separate file.
I'll see. I do plan to try making improvements to this peripheral
code but it just has to wait a little bit for other improvements
first.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/