Message-Id: <20070321163635.82697f08.dada1@cosmosbay.com>
Date: Wed, 21 Mar 2007 16:36:35 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Christoph Lameter <christoph@...eter.com>
Cc: Andi Kleen <andi@...stfloor.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux kernel <linux-kernel@...r.kernel.org>
Subject: Re: [RFC, PATCH] SLAB : [NUMA] keep nodeid in struct page instead
of struct slab
On Wed, 21 Mar 2007 07:41:46 -0700 (PDT)
Christoph Lameter <christoph@...eter.com> wrote:
> On Wed, 21 Mar 2007, Eric Dumazet wrote:
>
> > In order to avoid a cache miss in kmem_cache_free() on NUMA and reduce hot path length, we could exploit the following common facts.
> >
> > 1) MAX_NUMNODES <= 64
>
> MAX_NUMNODES == 1024 on IA64 and may be on x86_64 in the future.
So what? I am very happy for you, Christoph.
The average Linux machine (i.e. more than 99.99% of those on this planet) has MAX_NUMNODES <= 64.
If you read my patch, your lovely 1024-node machines will run exactly as before: no change at all for big machines.
>
> > 2) alignment of 'struct kmem_cache *' can be >= 64
> >
> > The following patch changes the page->lru.next to contain not only the
> > 'struct kmem_cache *' pointer, but also the nodeid in the low order
> > bits.
>
> Nack.
>
> I would suggest you audit slab and make sure that we never fall back and
> always enter a slab on the node of page_to_nid(page). If
> all allocations use GFP_THISNODE then you should be safe and then we can
> remove the nodeid from the slab structure and simply use the available
> node information in the page struct.
>
Last time I checked 'struct page', there was no nodeid in it.
I cooked this patch because you told me it was possible for a slab's nodeid to differ from page_to_nid(obj). Now you want me to prove the contrary. I am confused.
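For readers following the thread, here is a minimal, self-contained sketch (not the actual patch) of the pointer-tagging idea described in the quoted patch summary above: because 'struct kmem_cache *' is aligned to at least 64 bytes, the low 6 bits of the pointer stored in page->lru.next are always zero and can carry a nodeid when MAX_NUMNODES <= 64. The names NODE_BITS, pack_cache_node(), unpack_cache() and struct kmem_cache_like are illustrative stand-ins, not kernel identifiers.

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Illustrative sketch only: pack a small node id into the low-order
	 * bits of a sufficiently aligned pointer, as the patch does with the
	 * 'struct kmem_cache *' kept in page->lru.next.
	 *
	 * Assumptions stated in the thread: the pointed-to object is aligned
	 * to at least 64 bytes, so the low 6 bits are free to hold a node id
	 * in the range 0..63 (MAX_NUMNODES <= 64).
	 */
	#define NODE_BITS  6
	#define NODE_MASK  ((1UL << NODE_BITS) - 1)

	struct kmem_cache_like {
		/* stand-in for struct kmem_cache; only its alignment matters here */
		char pad[64];
	} __attribute__((aligned(64)));

	static inline uintptr_t pack_cache_node(struct kmem_cache_like *cachep,
						unsigned int nodeid)
	{
		assert(((uintptr_t)cachep & NODE_MASK) == 0); /* alignment guarantee */
		assert(nodeid <= NODE_MASK);                  /* MAX_NUMNODES <= 64  */
		return (uintptr_t)cachep | nodeid;
	}

	static inline struct kmem_cache_like *unpack_cache(uintptr_t packed)
	{
		return (struct kmem_cache_like *)(packed & ~NODE_MASK);
	}

	static inline unsigned int unpack_node(uintptr_t packed)
	{
		return (unsigned int)(packed & NODE_MASK);
	}

	int main(void)
	{
		static struct kmem_cache_like cache;
		uintptr_t packed = pack_cache_node(&cache, 3);

		/* Both the cache pointer and the node id come back intact. */
		printf("cache ok: %d, node: %u\n",
		       unpack_cache(packed) == &cache, unpack_node(packed));
		return 0;
	}

Where those alignment or MAX_NUMNODES assumptions do not hold, the patch keeps the existing layout, which is why machines with more than 64 nodes see no change.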