Message-Id: <1186425063.11797.80.camel@lappy>
Date: Mon, 06 Aug 2007 20:31:02 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Daniel Phillips <phillips@...nq.net>
Cc: Christoph Lameter <clameter@....com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Matt Mackall <mpm@...enic.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Steve Dickson <SteveD@...hat.com>
Subject: Re: [PATCH 02/10] mm: system wide ALLOC_NO_WATERMARK
On Mon, 2007-08-06 at 11:21 -0700, Daniel Phillips wrote:
> On Monday 06 August 2007 11:11, Christoph Lameter wrote:
> > On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> > > Change ALLOC_NO_WATERMARK page allocation such that dipping into
> > > the reserves becomes a system wide event.
> >
> > Shudder. That can just be a disaster for NUMA, both performance-wise
> > and logic-wise. One cpuset being low on memory should not affect
> > applications in other cpusets.
Do note that it is only PF_MEMALLOC allocations that will break the
cpuset, and one can argue that these are not application allocations but
system allocations.
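
To make that distinction concrete, here is a small user-space toy model.
It is not code from the patch set and all names in it are made up; it only
illustrates the point above: an allocation flagged PF_MEMALLOC-style
ignores both the cpuset and the watermark, so dipping below the watermark
becomes one system-wide condition instead of a per-cpuset one.

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 2

struct node {
	long free_pages;
	long watermark;
};

static struct node nodes[NR_NODES] = {
	{ .free_pages = 8,  .watermark = 8 },	/* node 0: at the watermark */
	{ .free_pages = 50, .watermark = 8 },	/* node 1: plenty free */
};

/* single, system-wide "we are eating the reserve" state */
static bool reserve_in_use;

struct task {
	bool pf_memalloc;	/* emergency / system allocation */
	int cpuset_node;	/* ordinary tasks are confined to this node */
};

static int alloc_page(struct task *tsk)
{
	for (int nid = 0; nid < NR_NODES; nid++) {
		struct node *n = &nodes[nid];

		/* ordinary allocations respect the cpuset ... */
		if (!tsk->pf_memalloc && nid != tsk->cpuset_node)
			continue;

		/* ... and the watermark */
		if (!tsk->pf_memalloc && n->free_pages <= n->watermark)
			continue;

		if (n->free_pages == 0)
			continue;

		/* PF_MEMALLOC going below the watermark: system-wide event */
		if (n->free_pages <= n->watermark)
			reserve_in_use = true;

		n->free_pages--;
		return nid;
	}
	return -1;	/* failed */
}

int main(void)
{
	struct task normal   = { .pf_memalloc = false, .cpuset_node = 0 };
	struct task memalloc = { .pf_memalloc = true,  .cpuset_node = 0 };

	printf("normal   -> node %d\n", alloc_page(&normal));
	printf("memalloc -> node %d, reserve_in_use=%d\n",
	       alloc_page(&memalloc), (int)reserve_in_use);
	return 0;
}

Compiled and run, the ordinary allocation fails at its node's watermark
while the PF_MEMALLOC one goes through and flips the global flag.
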
> Currently your system likely would have died here, so ending up with a
> reserve page temporarily on the wrong node is already an improvement.
>
> I agree that the reserve pool should be per-node in the end, but I do
> not think that serves the interest of simplifying the initial patch
> set. How about a NUMA performance patch that adds onto the end of
> Peter's series?
The trouble with keeping this per node is that all the code dealing with
the reserve then needs to keep per-node state, which, given that the
system is really crawling at that moment, seems excessive.
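
For comparison, a rough sketch of the extra bookkeeping a per-node
reserve would drag in. These structures are hypothetical, not from the
patch set; the point is only that every consumer of the reserve would
have to carry and update state for each node instead of testing one
global flag.

#include <stdbool.h>

#define MAX_NUMNODES 4

struct reserve_state {
	long pages_reserved;	/* emergency pages set aside on this node */
	long pages_in_use;	/* reserve pages currently handed out */
	bool depleted;		/* node-local "in trouble" flag */
};

/* one entry per node, all of which must be found, updated and kept
 * consistent while the machine is already struggling for memory */
static struct reserve_state node_reserve[MAX_NUMNODES];

/* with a single system-wide reserve this walk collapses into reading
 * one global flag */
static bool any_reserve_depleted(void)
{
	for (int nid = 0; nid < MAX_NUMNODES; nid++)
		if (node_reserve[nid].depleted)
			return true;
	return false;
}
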