Message-ID: <20130904154301.GA2975@sgi.com>
Date: Wed, 4 Sep 2013 10:43:01 -0500
From: Alex Thorlton <athorlton@....com>
To: Robin Holt <robinmholt@...il.com>
Cc: "Kirill A. Shutemov" <kirill@...temov.name>,
Dave Hansen <dave.hansen@...el.com>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
"Eric W . Biederman" <ebiederm@...ssion.com>,
Sedat Dilek <sedat.dilek@...il.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Dave Jones <davej@...hat.com>,
Michael Kerrisk <mtk.manpages@...il.com>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
David Howells <dhowells@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Al Viro <viro@...iv.linux.org.uk>,
Oleg Nesterov <oleg@...hat.com>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
Kees Cook <keescook@...omium.org>
Subject: Re: [PATCH 1/8] THP: Use real address for NUMA policy
On Tue, Aug 27, 2013 at 12:01:01PM -0500, Robin Holt wrote:
> Alex,
>
> Although the explanation seems plausible, have you verified this is
> actually possible? You could make a simple pthread test case which
> allocates a getpagesize() * <number-of-threads> area, prints its
> address and then each thread migrate and reference their page. Have
> the task then sleep(<long-time>) before exit. Look at the physical
> address space with dlook for those virtual addresses in both the THP
> and non-THP cases.
>
> Thanks,
> Robin
Robin,
I tweaked one of our other tests to behave pretty much exactly as I
described, and I can see a very significant increase in performance with
THP turned off. The test behaves as follows:
- malloc a large array
- Spawn a specified number of threads
- Have each thread touch small, evenly spaced chunks of the array (e.g.
for 128 threads, the array is divided into 128 chunks, and each thread
touches 1/128th of each chunk, dividing the array into 16,384 pieces)
With THP off, the majority of each thread's pages are node local. With
THP on, most of the pages end up as THPs on the first thread's node,
since that thread touches chunks that are close enough together to be
collapsed into THPs, which, of course, remain on the first node for
the duration of the test.
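In case it helps to see the access pattern concretely, here is a rough
sketch of what the test does (this is not the actual test program; the
thread count, array size, and the absence of any explicit CPU/node
binding are simplifications):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NTHREADS   128
#define ARRAY_SIZE (64UL << 30)   /* 64GB total, as in the timings below */

static char *array;

static void *toucher(void *arg)
{
        unsigned long tid   = (unsigned long)arg;
        unsigned long chunk = ARRAY_SIZE / NTHREADS;  /* 128 chunks        */
        unsigned long piece = chunk / NTHREADS;       /* 1/128th of each   */
        unsigned long c;

        /* Touch this thread's slice of every chunk: 128 * 128 pieces total */
        for (c = 0; c < NTHREADS; c++)
                memset(array + c * chunk + tid * piece, 1, piece);

        return NULL;
}

int main(void)
{
        pthread_t threads[NTHREADS];
        unsigned long i;

        array = malloc(ARRAY_SIZE);
        if (!array)
                return 1;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&threads[i], NULL, toucher, (void *)i);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(threads[i], NULL);

        free(array);
        return 0;
}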
Here are some timings for 128 threads, allocating a total of 64GB:
THP on:
real 1m6.394s
user 16m1.160s
sys 75m25.232s
THP off:
real 0m35.251s
user 26m37.316s
sys 3m28.472s
The performance hit here isn't as severe as shown with the SPEC workload
that we originally used, but it still appears to consistently take about
twice as long with THP enabled.
--