Message-Id: <1179350433.2912.66.camel@lappy>
Date: Wed, 16 May 2007 23:20:33 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Christoph Lameter <clameter@....com>
Cc: Matt Mackall <mpm@...enic.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Thomas Graf <tgraf@...g.ch>,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH 0/5] make slab gfp fair
On Wed, 2007-05-16 at 14:13 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
> > > How do we know that we are out of trouble? Just try another alloc and see? If
> > > that is the case then we may be failing allocations after the memory
> > > situation has cleared up.
> > No, no, for each regular allocation we retry populating ->cpu_slab with
> > a new slab. If that works we're out of the woods and the ->reserve_slab
> > is cleaned up.
>
> Hmmm.. so we could simplify the scheme by storing the last rank
> somewhere.
Not sure how that would help..
> If the alloc has less priority and we can extend the slab then
> clear up the situation.
>
> If we cannot extend the slab then the alloc must fail.
That is exactly what is done; and as mpm remarked the other day, it's a
binary system: we don't need full gfp fairness, just ALLOC_NO_WATERMARKS.
That state is already captured by ->reserve_slab: if it is present, the
last allocation needed the reserves; if not, the last allocation was
served normally.
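
For illustration, the slow path then looks roughly like this. This is
only a sketch of the behaviour described above, not the patch code; the
field and helper names (reserve_slab, new_slab(), discard_slab(),
alloc_from(), allocation_may_use_reserves()) are made up for the example:

/*
 * Sketch only: retry a normal slab allocation on every regular alloc;
 * if that succeeds, drop the reserve slab, otherwise only allocations
 * entitled to ALLOC_NO_WATERMARKS may be served from it.
 */
static void *slab_alloc_slow(struct kmem_cache *s, gfp_t gfpflags)
{
	struct page *page;

	/* Every regular allocation first retries a normal slab alloc. */
	page = new_slab(s, gfpflags);
	if (page) {
		/*
		 * That worked, so the memory situation has cleared up:
		 * release the reserve slab and resume normal operation.
		 */
		if (s->reserve_slab) {
			discard_slab(s, s->reserve_slab);
			s->reserve_slab = NULL;
		}
		s->cpu_slab = page;
		return alloc_from(s->cpu_slab);
	}

	/*
	 * Still under pressure: the test is binary -- only allocations
	 * allowed to dip below the watermarks may use the reserve slab.
	 */
	if (s->reserve_slab && allocation_may_use_reserves(gfpflags))
		return alloc_from(s->reserve_slab);

	return NULL;
}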
> Could you put the rank into the page flags? On 64 bit at least there
> should be enough space.
Currently I stick the newly allocated page's rank in page->rank (yet
another overload of page->index). I've not yet seen the need to keep it
around any longer than that.
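
For what it's worth, the overload amounts to something like this
(illustrative only, not the actual patch code):

/* "page->rank" simply reuses the page->index slot. */
static inline void set_page_rank(struct page *page, int rank)
{
	page->index = rank;
}

static inline int page_rank(struct page *page)
{
	return page->index;
}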