Message-ID: <Pine.LNX.4.64.0611060846070.25351@schroedinger.engr.sgi.com>
Date: Mon, 6 Nov 2006 08:53:22 -0800 (PST)
From: Christoph Lameter <clameter@....com>
To: Andrew Morton <akpm@...l.org>
cc: linux-kernel@...r.kernel.org
Subject: Re: Avoid allocating during interleave from almost full nodes
On Fri, 3 Nov 2006, Andrew Morton wrote:
> This has almost nothing to do with elapsed time.
>
> How about doing, in free_pages_bulk():
>
> 	if (zone->over_interleave_pages) {
> 		zone->over_interleave_pages = 0;
> 		node_clear(zone_to_nid(zone), full_interleave_nodes);
> 	}
Hmmm... We would also have to compare against the minimum number of pages
required before clearing the node. Isn't it a bit much to add two
comparisons to the page free path?
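Roughly what that would look like (a sketch only, not a patch;
over_interleave_pages and full_interleave_nodes are the fields proposed
above, and enough_pages_free() is just a placeholder for whatever the
minimum-pages test ends up being):

/*
 * Kernel-context sketch.  Both tests run on every bulk free, which is
 * the extra cost in the free path mentioned above.
 */
static void maybe_reopen_interleave_node(struct zone *zone)
{
	if (zone->over_interleave_pages && enough_pages_free(zone)) {
		zone->over_interleave_pages = 0;
		node_clear(zone_to_nid(zone), full_interleave_nodes);
	}
}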
> > It is needlessly expensive if it's done for an allocation that is not bound
> > to a specific node and there are other nodes with free pages. We may throw
> > out pages that we need later.
>
> Well it grossly changes the meaning of "interleaving". We might as well
> call it something else. It's not necessarily worse, but it's not
> interleaved any more.
It goes from node to node unless there is a significant imbalance, with
some nodes over the limit and some under. In that case allocations take
place round-robin over the nodes under the limit until all nodes are back
under the limit; then we resume cycling over all nodes (see the sketch
below).
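Something along these lines, modeled on interleave_nodes() in
mm/mempolicy.c (sketch only; full_interleave_nodes is the nodemask
proposed in this thread, and interleave_next_node() is just a name for
the illustration):

static unsigned interleave_next_node(struct mempolicy *policy)
{
	nodemask_t allowed;
	unsigned nid;

	/* Skip the nodes currently marked full. */
	nodes_andnot(allowed, policy->v.nodes, full_interleave_nodes);
	if (nodes_empty(allowed))
		/* Everything is marked full: plain round robin again. */
		allowed = policy->v.nodes;

	nid = next_node(current->il_next, allowed);
	if (nid >= MAX_NUMNODES)
		nid = first_node(allowed);
	current->il_next = nid;
	return nid;
}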
> Actually by staying on the same node for a string of successive allocations
> it could well be quicker. How come MPOL_INTERLEAVE doesn't already do some
> batching? Or does it, and I missed it?
It should do interleaving because the data is going to be accessed from
multiple nodes. Clustering on a single node may create hotspots or
imbalances.

Hmmm... We should also check how many nodes remain under the limit: if
there is just a single node left then we need to ignore the limit.
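That check could look roughly like this (sketch; full_interleave_nodes is
again the mask proposed in this thread, and ignore_interleave_limit() is
just a name for the illustration):

static int ignore_interleave_limit(const nodemask_t *interleave_set)
{
	nodemask_t remaining;

	/* Nodes still usable once the ones marked full are dropped. */
	nodes_andnot(remaining, *interleave_set, full_interleave_nodes);

	/* At most one candidate left: ignore the limit rather than
	 * funnelling every interleaved allocation onto one node. */
	return nodes_weight(remaining) <= 1;
}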