Message-ID: <1359388558.5783.171.camel@marge.simpson.net>
Date: Mon, 28 Jan 2013 16:55:58 +0100
From: Mike Galbraith <efault@....de>
To: Borislav Petkov <bp@...en8.de>
Cc: Alex Shi <alex.shi@...el.com>, torvalds@...ux-foundation.org,
mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
namhyung@...nel.org, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org
Subject: Re: [patch v4 0/18] sched: simplified fork, release load avg and
power awareness scheduling
On Mon, 2013-01-28 at 16:22 +0100, Borislav Petkov wrote:
> On Mon, Jan 28, 2013 at 12:40:46PM +0100, Mike Galbraith wrote:
> > > No no, that's not restricted to one node. It's just overloaded because
> > > I turned balancing off at the NODE domain level.
> >
> > Which shows only that I was multitasking, and in a rush. Boy was that
> > dumb. Hohum.
>
> Ok, let's take a step back and slow it down a bit so that people like me
> can understand it: you want to try it with disabled load balancing on
> the node level, AFAICT. But with that many tasks, perf will suck anyway,
> no? Unless you want to benchmark the numa-aware aspect and see whether
> load balancing on the node level feels differently, perf-wise?
The broken thought was: since it's not the wakeup path, stop node balancing...
but killing all of it killed FORK/EXEC balance too, oops.
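(Aside, for anyone poking at this themselves: with CONFIG_SCHED_DEBUG the
per-domain flags are exposed under /proc/sys/kernel/sched_domain/, so you can
see which balance bits a given level still carries.  Rough sketch only -- the
flag values below are copied from a ~3.8-era include/linux/sched.h, and
"domain1" standing in for the NODE level is an assumption, check your own
topology.)

/*
 * Sketch: decode the balance bits of one sched_domain level.
 * Flag values assumed from a ~3.8 include/linux/sched.h.
 */
#include <stdio.h>

#define SD_LOAD_BALANCE		0x0001	/* periodic balancing on this domain */
#define SD_BALANCE_NEWIDLE	0x0002	/* balance when about to go idle */
#define SD_BALANCE_EXEC		0x0004	/* balance on exec */
#define SD_BALANCE_FORK		0x0008	/* balance on fork, clone */
#define SD_BALANCE_WAKE		0x0010	/* balance on wakeup */

int main(int argc, char **argv)
{
	/* assumed path; which domainN is NODE depends on the box */
	const char *path = argc > 1 ? argv[1] :
		"/proc/sys/kernel/sched_domain/cpu0/domain1/flags";
	FILE *f = fopen(path, "r");
	unsigned long flags;

	if (!f || fscanf(f, "%lu", &flags) != 1) {
		perror(path);
		return 1;
	}
	printf("%s: %#lx\n", path, flags);
	printf("  periodic balance: %s\n", flags & SD_LOAD_BALANCE  ? "yes" : "no");
	printf("  newidle balance:  %s\n", flags & SD_BALANCE_NEWIDLE ? "yes" : "no");
	printf("  fork balance:     %s\n", flags & SD_BALANCE_FORK  ? "yes" : "no");
	printf("  exec balance:     %s\n", flags & SD_BALANCE_EXEC  ? "yes" : "no");
	printf("  wake balance:     %s\n", flags & SD_BALANCE_WAKE  ? "yes" : "no");
	fclose(f);
	return 0;
}

The flags file should also be writable in trees of that vintage, so a masked
value can be written back to flip a level's balance behaviour at runtime --
roughly the knob being talked about above.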
I think I'm done with this thing though. See the mail I just sent. There
are better things to do than letting the box jerk my chain endlessly ;-)
-Mike