Message-ID: <20120319141647.GN24602@redhat.com>
Date: Mon, 19 Mar 2012 15:16:47 +0100
From: Andrea Arcangeli <aarcange@...hat.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Avi Kivity <avi@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, Paul Turner <pjt@...gle.com>,
Suresh Siddha <suresh.b.siddha@...el.com>,
Mike Galbraith <efault@....de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
Dan Smith <danms@...ibm.com>,
Bharata B Rao <bharata.rao@...il.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC][PATCH 00/26] sched/numa
On Mon, Mar 19, 2012 at 02:26:34PM +0100, Peter Zijlstra wrote:
> So what about the case where all I do is compile kernels and we already
> have near perfect locality because everything is short running? You're
> still scanning that memory, and I get no benefit.
I could add an option to delay the scan and enable it only on
long-lived "mm"s. In practice I measured the scanning cost and it was
in the unmeasurable range on the host, which is why I haven't done it
yet; I also tried to avoid special cases and keep things as generic as
possible, treating everything the same. Maybe it's a good idea, maybe
not, since it would further delay how long it takes to react to a
wrong memory layout.
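
For illustration, a minimal sketch of what such a delayed-scan check
could look like; knuma_scand_should_scan(), scan_delay_jiffies and
mm->start_time are all invented names for this sketch, not part of the
actual patch set:

	/*
	 * Hypothetical sketch only: skip "mm"s younger than a
	 * configurable threshold, so short-lived tasks (e.g. a kernel
	 * compile) never get scanned at all.
	 */
	static bool knuma_scand_should_scan(struct mm_struct *mm)
	{
		/* Tunable: 0 means scan immediately (current behavior). */
		if (!scan_delay_jiffies)
			return true;

		/* Only scan mm's that have lived long enough to matter. */
		return time_after(jiffies,
				  mm->start_time + scan_delay_jiffies);
	}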
If you stop knuma_scand with sysfs (echo 0 >...) the whole thing
eventually stops. It's like three gears: the first gear is
knuma_scand, the second gear is the NUMA hinting page fault, and the
third gear is knuma_migrated plus the CPU scheduler that gets driven
by it.
So it's easy to benchmark the fixed cost.
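
As a rough schematic of those three gears (invented helper names, not
the real implementation):

	/* Gear 1: knuma_scand periodically arms NUMA hinting faults. */
	static void knuma_scand_pass(struct mm_struct *mm)
	{
		mark_ptes_numa(mm);	/* hypothetical helper */
	}

	/*
	 * Gear 2: the NUMA hinting page fault records which node
	 * touched the page and queues misplaced pages for migration.
	 */
	static void numa_hinting_fault(struct page *page, int accessing_nid)
	{
		record_task_locality(current, accessing_nid); /* hypothetical */
		if (page_to_nid(page) != accessing_nid)
			queue_for_knuma_migrated(page, accessing_nid); /* hypothetical */
	}

	/*
	 * Gear 3: knuma_migrated drains the per-node queues and
	 * migrates pages, while the scheduler converges tasks toward
	 * their memory.
	 */

Stopping gear 1 starves gears 2 and 3, which is why the whole thing
eventually stops and the fixed cost is easy to isolate.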