Message-ID: <20121122012122.GA7938@gmail.com>
Date: Thu, 22 Nov 2012 02:21:22 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Alex Shi <lkml.alex@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Paul Turner <pjt@...gle.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Christoph Lameter <cl@...ux.com>,
Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: numa/core regressions fixed - more testers wanted
* Alex Shi <lkml.alex@...il.com> wrote:
> >
> > Those of you who would like to test all the latest patches are
> > welcome to pick up latest bits at tip:master:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
> >
>
> I am wondering whether this is a problem, but it still exists on HEAD: c418de93e39891
> http://article.gmane.org/gmane.linux.kernel.mm/90131/match=compiled+with+name+pl+and+start+it+on+my
>
> For example, when just starting 4 pl tasks, often 3 were running
> on node 0 and 1 was running on node 1. The old balancer would
> spread the tasks evenly across different nodes and cores.
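
( The 'pl' test program itself is not included in this thread; the
  sketch below is just a minimal CPU spinner, assuming libnuma is
  installed, that reports which node each thread ends up on: )

/*
 * Minimal stand-in for the 'pl' workload (not the original test):
 * N busy-looping threads that periodically report their CPU and
 * NUMA node.  Build with:  gcc -O2 -pthread spin.c -o spin -lnuma
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>
#include <numa.h>

static void *spin(void *arg)
{
	long id = (long)arg;
	unsigned long n = 0;

	for (;;) {
		/* Burn CPU, report placement every ~2^28 iterations: */
		if (!(++n & ((1UL << 28) - 1))) {
			int cpu = sched_getcpu();

			printf("thread %ld: cpu %3d, node %d\n",
			       id, cpu, numa_node_of_cpu(cpu));
		}
	}
	return NULL;
}

int main(int argc, char **argv)
{
	long i, nr_threads = argc > 1 ? atol(argv[1]) : 4;
	pthread_t tid;

	for (i = 0; i < nr_threads; i++)
		pthread_create(&tid, NULL, spin, (void *)i);

	pause();	/* threads run until the program is killed */
	return 0;
}

Running './spin 4' on an otherwise idle box shows whether the four
threads stay lopsided across nodes or get spread out over time.
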
This is "normal" in the sense that the current mainline
scheduler is (supposed to be) doing something similar: if the
node is still within capacity, then there's no reason to move
those threads.
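
( Roughly speaking -- this is only a userspace illustration of the
  "within capacity" notion, not the load balancer code itself: with
  3 runnable tasks on, say, an 8-CPU node, the node still has spare
  capacity, so regular load balancing sees nothing to fix: )

/* Toy model of the capacity check -- not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct node_load {
	int nr_running;		/* runnable tasks on the node */
	int nr_cpus;		/* CPUs in the node */
};

static bool within_capacity(const struct node_load *n)
{
	return n->nr_running <= n->nr_cpus;
}

int main(void)
{
	struct node_load node0 = { .nr_running = 3, .nr_cpus = 8 };

	printf("node 0 %s capacity -> %s\n",
	       within_capacity(&node0) ? "within" : "over",
	       within_capacity(&node0) ? "leave tasks alone" : "migrate");
	return 0;
}
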
OTOH, I think with NUMA balancing we indeed want to spread them
better, if those tasks do not share memory with each other but
use their own memory. If they share memory then they should
remain on the same node if possible.
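
( As a sketch of that preference -- again illustrative only, not
  what tip:master implements -- the placement decision would look
  something like: )

/* Illustrative NUMA placement preference, not tip:master code. */
#include <stdbool.h>
#include <stdio.h>

/*
 * Tasks working on private memory prefer the least loaded node;
 * tasks sharing memory prefer the node where their partners and
 * the shared memory already live.
 */
static int preferred_node(bool shares_memory, int partner_node,
			  int least_loaded_node)
{
	return shares_memory ? partner_node : least_loaded_node;
}

int main(void)
{
	/* 4 private-memory tasks, node 1 nearly idle: spread out.  */
	printf("private-memory task -> node %d\n",
	       preferred_node(false, 0, 1));

	/* Tasks sharing memory resident on node 0: stay together.  */
	printf("memory-sharing task -> node %d\n",
	       preferred_node(true, 0, 1));
	return 0;
}
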
Thanks,
Ingo