Message-Id: <1179348898.2912.57.camel@lappy>
Date: Wed, 16 May 2007 22:54:58 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Christoph Lameter <clameter@....com>
Cc: Matt Mackall <mpm@...enic.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Thomas Graf <tgraf@...g.ch>,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH 0/5] make slab gfp fair
On Wed, 2007-05-16 at 13:44 -0700, Christoph Lameter wrote:
> On Wed, 16 May 2007, Peter Zijlstra wrote:
>
> > > How does all of this interact with
> > >
> > > 1. cpusets
> > >
> > > 2. dma allocations and highmem?
> > >
> > > 3. Containers?
> >
> > Much like the normal kmem_cache would do; I'm not changing any of the
> > page allocation semantics.
>
> So if we run out of memory in a cpuset, network I/O will still fail?
>
> I do not see any distinction between DMA and regular memory. If we need
> DMA memory to complete the transaction, then this won't work?
If the network stack relies on slabs that are cpuset-constrained and the
page allocator reserves do not match that, then yes, it goes bang.
> > But it's desirable to try the normal cpu_slab path first, to detect
> > that the situation has subsided and we can resume normal operation.
>
> Is there some indicator somewhere that tells us we are in trouble? I
> just see the ranks.
Yes, and page->rank will only ever be 0 if the page was allocated with
ALLOC_NO_WATERMARKS, and that only ever happens if we're in dire
straits and entitled to it.
Otherwise it'll be ALLOC_WMARK_MIN or some such.
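
To make that concrete, a minimal sketch of the test this implies; the
helper name is invented for illustration, and I'm assuming page->rank
records which ALLOC_* watermark the allocation passed, as described
above:

	/*
	 * Sketch only, not the actual patch.  rank == 0 means the page
	 * was handed out under ALLOC_NO_WATERMARKS, i.e. we dipped into
	 * the emergency reserves.  Any allocation that respected a
	 * watermark gets a non-zero rank such as ALLOC_WMARK_MIN.
	 */
	static inline int page_from_reserves(struct page *page)
	{
		return page->rank == 0;
	}

The slab hot path could then use a test like this to notice that a
freshly allocated page no longer came from the reserves, and that
normal operation can resume.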