Message-Id: <20061106124257.deffa31c.akpm@osdl.org>
Date: Mon, 6 Nov 2006 12:42:57 -0800
From: Andrew Morton <akpm@...l.org>
To: Christoph Lameter <clameter@....com>
Cc: linux-kernel@...r.kernel.org
Subject: Re: Avoid allocating during interleave from almost full nodes
On Mon, 6 Nov 2006 12:31:36 -0800 (PST)
Christoph Lameter <clameter@....com> wrote:
> On Mon, 6 Nov 2006, Andrew Morton wrote:
>
> > I'm referring to the metadata rather than to the pages themselves: the zone
> > structure at least. I bet there are a couple of cache misses in there.
>
> Yes, in particular in large systems.
>
> > > The number
> > > of pages to take will vary depending on the size of the shared data. For
> > > shared data areas that are just a couple of pages this won't work.
> >
> > What is "shared data"?
>
> Interleave is used for data accessed from many nodes otherwise one would
> prefer to allocate from the current zone. The shared data may be very
> frequently accessed from multiple nodes and one would like different NUMA
> nodes to respond to these requests.
But what is "shared data"?? You're using a new but very general term
without defining it.
> > > > Umm, but that's exactly what the patch we're discussing will do.
> > > Not if we have a set of remaining nodes.
> >
> > Yes it is. You're proposing taking an arbitrarily large number of
> > successive pages from the same node rather than interleaving the allocations.
> > That will create "hotspots or imbalances" (whatever they are).
>
> No I proposed to go round robin over the remaining nodes. The special case
> of one node left could be dealt with.
OK, but if two nodes have a lot of free pages and the rest don't then
interleave will consume those free pages without performing any reclaim
from all the other nodes. Hence hotspots or imbalances.
Whatever they are. Why does it matter?