Message-ID: <1311170893.2338.29.camel@edumazet-HP-Compaq-6005-Pro-SFF-PC>
Date: Wed, 20 Jul 2011 16:08:13 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Christoph Lameter <cl@...ux.com>
Cc: Mel Gorman <mgorman@...e.de>, Pekka Enberg <penberg@...nel.org>,
Konstantin Khlebnikov <khlebnikov@...nvz.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Matt Mackall <mpm@...enic.com>
Subject: Re: [PATCH] mm-slab: allocate kmem_cache with __GFP_REPEAT

On Wednesday, 20 July 2011 at 08:56 -0500, Christoph Lameter wrote:
> On Wed, 20 Jul 2011, Mel Gorman wrote:
>
> > > The changelog isn't that convincing, really. This is
> > > kmem_cache_create() so I'm surprised we'd ever get NULL here in
> > > practice. Does this fix some problem you're seeing? If this is
> > > really an issue, I'd blame the page allocator as GFP_KERNEL should
> > > just work.
> > >
> >
> > Besides, is allocating from cache_cache really a
> > PAGE_ALLOC_COSTLY_ORDER allocation? On my laptop at least, it's an
> > order-2 allocation, which supports up to 512 CPUs and 512 nodes.
>
> Slab's kmem_cache is configured with an array of NR_CPUS entries, which
> is the maximum number of CPUs supported. Some distros support 4096 CPUs
> in order to accommodate SGI machines. That array will then have a size
> of 4096 * 8 = 32k.
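
For reference, a rough sketch of the current layout (field names from
mm/slab.c, ordering from memory, so take it as an illustration only):

	struct kmem_cache {
		/* per-cpu caches, touched on every alloc/free */
		struct array_cache *array[NR_CPUS];

		/* ... tunables, flags, statistics ... */

		/*
		 * per-node lists, kept as the last member so that the
		 * real allocation can be truncated to nr_node_ids
		 * entries (see below)
		 */
		struct kmem_list3 *nodelists[MAX_NUMNODES];
	};

With NR_CPUS=4096 on a 64bit kernel, array[] alone is 4096 * 8 = 32768
bytes.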
We currently support a dynamic scheme for the possible nodes:
	cache_cache.buffer_size = offsetof(struct kmem_cache, nodelists) +
				  nr_node_ids * sizeof(struct kmem_list3 *);
We could use a similar trick to make the real size depend on both
nr_node_ids and nr_cpu_ids.
(struct kmem_cache)->array would become a pointer.
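
Something like this (untested, just a sketch of the idea; names are only
illustrative):

	struct kmem_cache {
		/*
		 * would become a pointer to an array of nr_cpu_ids
		 * entries, allocated when the cache is set up
		 */
		struct array_cache **array;

		/* ... */

		/* still the last member, truncated to nr_node_ids entries */
		struct kmem_list3 *nodelists[MAX_NUMNODES];
	};

	cache_cache.buffer_size = offsetof(struct kmem_cache, nodelists) +
				  nr_node_ids * sizeof(struct kmem_list3 *);

	/* the per-cpu pointer array would be allocated separately, e.g. */
	cachep->array = kzalloc(nr_cpu_ids * sizeof(struct array_cache *),
				GFP_KERNEL);

That way the size of each kmem_cache scales with nr_cpu_ids and
nr_node_ids instead of NR_CPUS, at the cost of one extra dereference to
reach the per-cpu array caches.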