Message-ID: <20070514161224.GC11115@waste.org>
Date: Mon, 14 May 2007 11:12:24 -0500
From: Matt Mackall <mpm@...enic.com>
To: Christoph Lameter <clameter@....com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Thomas Graf <tgraf@...g.ch>,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH 0/5] make slab gfp fair
On Mon, May 14, 2007 at 08:53:21AM -0700, Christoph Lameter wrote:
> On Mon, 14 May 2007, Peter Zijlstra wrote:
>
> > In the interest of creating a reserve-based allocator, we need to make the slab
> > allocator (*sigh*, all three) fair with respect to GFP flags.
>
> I am not sure what the point of all of this is.
>
> > That is, we need to protect memory from being used by easier gfp flags than it
> > was allocated with. If our reserve is placed below GFP_ATOMIC, we do not want a
> > GFP_KERNEL allocation to walk away with it - a scenario that is perfectly
> > possible with the current allocators.
>
> Why does this have to be handled by the slab allocators at all? If you have
> free pages in the page allocator then the slab allocators will be able to
> use that reserve.
If I understand this correctly:

privileged thread                      unprivileged greedy process

kmem_cache_alloc(...)
  adds new slab page from lowmem pool
do_io()
                                       kmem_cache_alloc(...)
                                       kmem_cache_alloc(...)
                                       kmem_cache_alloc(...)
                                       kmem_cache_alloc(...)
                                       kmem_cache_alloc(...)
                                       ...
                                       eats it all
kmem_cache_alloc(...) -> ENOMEM
who ate my donuts?!
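
To make the failure mode concrete, here is a tiny userspace model of that
interleaving (plain C, made-up constants and names, nothing like the real
slab internals): the page pulled from the reserve under GFP_ATOMIC is just
as visible to subsequent GFP_KERNEL allocations, so the greedy process
drains it and the next atomic allocation fails.

/*
 * Userspace model of the interleaving above -- illustrative only.  One
 * "slab page" of OBJS_PER_PAGE objects is refilled from a one-page
 * reserve by an ATOMIC allocation; KERNEL allocations then drain the
 * same page.
 */
#include <stdio.h>

#define OBJS_PER_PAGE 8

enum gfp { GFP_KERNEL, GFP_ATOMIC };

static int reserve_pages = 1;	/* the lowmem pool: one page left */
static int free_objects;	/* free objects on the current slab page */

static void *model_kmem_cache_alloc(enum gfp flags)
{
	if (free_objects == 0) {
		/* Only an ATOMIC allocation may take a reserve page... */
		if (flags != GFP_ATOMIC || reserve_pages == 0)
			return NULL;			/* ENOMEM */
		reserve_pages--;
		free_objects = OBJS_PER_PAGE;		/* new slab page */
	}
	/* ...but once on the slab, the page serves any caller. */
	free_objects--;
	return (void *)1;				/* dummy object */
}

int main(void)
{
	/* privileged thread: adds new slab page from lowmem pool */
	model_kmem_cache_alloc(GFP_ATOMIC);

	/* unprivileged greedy process: eats it all */
	while (model_kmem_cache_alloc(GFP_KERNEL))
		;

	/* privileged thread: who ate my donuts?! */
	if (model_kmem_cache_alloc(GFP_ATOMIC) == NULL)
		printf("GFP_ATOMIC allocation failed: the reserve page "
		       "was drained via GFP_KERNEL\n");
	return 0;
}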
But I think this solution is somewhat overkill. If we only care about
this issue in the OOM-avoidance case, then our rank reduces to a
boolean: a slab page either came from the reserve or it didn't.
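
In that boolean world, each slab page would carry a single bit, roughly
like the sketch below (again my reading with illustrative names, not the
actual patchset):

#include <stdbool.h>

enum gfp { GFP_KERNEL, GFP_ATOMIC };

struct slab_page {
	bool from_reserve;	/* one bit: page came from the reserve */
	int free_objects;
};

/*
 * The boolean "rank": objects on a reserve-backed page are handed out
 * only to callers that would have been entitled to the reserve anyway;
 * everyone else sees the page as full and must get a normal page.
 */
static bool may_alloc_from(const struct slab_page *page, enum gfp flags)
{
	if (page->from_reserve && flags != GFP_ATOMIC)
		return false;
	return page->free_objects > 0;
}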
--
Mathematics is the supreme nostalgia of our time.