Message-Id: <1172136508.6374.41.camel@twins>
Date: Thu, 22 Feb 2007 10:28:28 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
netdev@...r.kernel.org,
Trond Myklebust <trond.myklebust@....uio.no>,
Thomas Graf <tgraf@...g.ch>, David Miller <davem@...emloft.net>
Subject: Re: [PATCH 08/29] mm: kmem_cache_objs_to_pages()
On Wed, 2007-02-21 at 17:47 +0200, Pekka Enberg wrote:
> Hi Peter,
>
> On 2/21/07, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> > Provide a method to calculate the number of pages needed to store a given
> > number of slab objects (upper bound when considering possible partial and
> > free slabs).
>
> So how does this work? You ask the slab allocator how many pages you
> need for a given number of objects and then those pages are available
> to it via the page allocator? Can other users also dip into those
> reserves?
Everybody (ab)using PF_MEMALLOC or the new __GFP_EMERGENCY.
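
Roughly, the gate looks like this (a sketch only; __GFP_EMERGENCY is the
flag this series introduces, and the helper name here is made up for
illustration):

	#include <linux/sched.h>	/* current, PF_MEMALLOC */
	#include <linux/gfp.h>		/* gfp_t */

	/*
	 * Who may dip into the reserve: anyone running with PF_MEMALLOC
	 * set, or anyone passing the __GFP_EMERGENCY flag added by this
	 * series.
	 */
	static inline int gfp_emergency_ok(gfp_t gfp_mask)
	{
		return (current->flags & PF_MEMALLOC) ||
		       (gfp_mask & __GFP_EMERGENCY);
	}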
> I would prefer we simply have an API for telling the slab allocator to
> keep certain number of pages in a reserve for a cache rather than
> exposing internals such as object size to rest of the world.
Keeping the free pages in the page allocator is good for the buddy
system, although you could probably implement a reserve interface
without actually claiming the pages.
However, doing it this way separates making the reserve from the actual
kmem_cache object; I can just carry a sum of pages around instead of a
list of kmem_cache pointers.
I calculate a potential reserve that I might never actually commit to
making (and using).
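
Something along these lines (a sketch under assumptions: the cache
pointers, mem_reserve_pages() and the exact kmem_cache_objs_to_pages()
signature are illustrative placeholders, not the actual interface):

	#include <linux/slab.h>

	/*
	 * Sum per-cache upper bounds into a single page count; nothing
	 * is claimed from the page allocator at this point.
	 */
	static unsigned long reserve_pages_needed(struct kmem_cache *skb_cache,
						  int max_skbs,
						  struct kmem_cache *req_cache,
						  int max_reqs)
	{
		unsigned long pages = 0;

		/* upper bound of pages needed to back each cache's objects */
		pages += kmem_cache_objs_to_pages(skb_cache, max_skbs);
		pages += kmem_cache_objs_to_pages(req_cache, max_reqs);

		return pages;
	}

	/* ... and only if/when we decide to commit:
	 *
	 *	mem_reserve_pages(reserve_pages_needed(...));
	 */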
Also, I don't see what internals are exposed, kmem_cache is still
private to slab.c.