Message-Id: <20061104025128.ca3c9859.pj@sgi.com>
Date: Sat, 4 Nov 2006 02:51:28 -0800
From: Paul Jackson <pj@....com>
To: Andrew Morton <akpm@...l.org>
Cc: clameter@....com, linux-kernel@...r.kernel.org
Subject: Re: Avoid allocating during interleave from almost full nodes
Andrew wrote:
> Depends what it's doing. "number of pages allocated" would be a good
> "clock" to use in the VM. Or pages scanned. Or per-cpu-pages reloads.
> Something which adjusts to what's going on.
Christoph,
Do you know of any existing counters that we could use like this?
Adding a system-wide count of pages allocated or scanned, just for
these fullnode hint caches, bothers me.
Sure, Andrew is right in the purist sense. The connection to any
wall clock time base for these events is tenuous at best.
But if the tradeoff is:
1) a new global counter on the page allocator or scanning path,
2) versus an impure heuristic for zapping these full node hints,
then I can't justify the new counter. I work hard on this stuff to
keep any frequently written global data off hot code paths.
I just don't see any real-world case where having a bogus time base for
these fullnode zaps actually hurts anyone. A global counter in the
main allocator or scanning code paths hurts everyone (well, everyone on
big NUMA boxes, anyhow ... ;).
It might not matter for this here interleave refinement patch (which has
other open questions), but it could at least (in theory) benefit my
zonelist caching patch to get a more reasonable trigger for zapping the
fullnode hint cache.
Even using an existing counter isn't "free." The more readers a
frequently updated warm cache line has, the hotter it gets.
Perhaps best if we used a node or cpu local counter.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@....com> 1.925.600.0401