Message-ID: <0000013b04769cf2-b57b16c0-5af0-4e7e-a736-e0aa2d4e4e78-000000@email.amazonses.com>
Date: Thu, 15 Nov 2012 14:26:21 +0000
From: Christoph Lameter <cl@...ux.com>
To: Ingo Molnar <mingo@...nel.org>
cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Paul Turner <pjt@...gle.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 0/8] Announcement: Enhanced NUMA scheduling with adaptive
affinity
On Tue, 13 Nov 2012, Ingo Molnar wrote:
> > the pages over both nodes in use.
>
> I'd not go as far as to claim that to be a general rule: the
> correct placement depends on the system and workload specifics:
> how much memory is on each node, how many tasks run on each
> node, and whether the access patterns and working set of the
> tasks is symmetric amongst each other - which is not a given at
> all.
>
> Say consider a database server that executes small and large
> queries over a large, memory-shared database, and has worker
> tasks to clients, to serve each query. Depending on the nature
> of the queries, interleaving can easily be the wrong thing to
> do.
Interleaving memory areas that receive a roughly equal number of shared
accesses from multiple nodes is essential to limit the traffic on the
interconnect and get top performance.
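For concreteness, here is a minimal userspace sketch of that kind of
placement, using mbind(2) with MPOL_INTERLEAVE (assuming a Linux machine
with the libnuma headers installed and at least nodes 0 and 1 populated;
the region size and node set are purely illustrative):

/*
 * Interleave a shared region across nodes 0 and 1 so that accesses
 * from tasks on either node spread over both memory controllers
 * instead of saturating a single interconnect link.
 */
#include <numaif.h>	/* mbind, MPOL_INTERLEAVE; link with -lnuma */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB shared area */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* nodes 0,1 */
	if (mbind(buf, len, MPOL_INTERLEAVE, &nodemask,
		  sizeof(nodemask) * 8, 0) != 0) {
		perror("mbind");
		return 1;
	}

	/* Pages are now allocated round-robin across the two nodes
	 * as they are first touched. */
	return 0;
}

This is what numactl --interleave does wholesale for a process; the point
above is that the kernel would have to detect which areas actually see
balanced shared access and apply such a policy to just those.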
I guess, though, that in a non-HPC environment, where you are not
interested in one specific load running at top speed, varying contention
on the interconnect and the memory buses is acceptable. But this means
that HPC loads cannot be auto-tuned.