Message-ID: <1358915494.5752.46.camel@marge.simpson.net>
Date: Wed, 23 Jan 2013 05:31:34 +0100
From: Mike Galbraith <bitbucket@...ine.de>
To: Michael Wang <wangyun@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com,
peterz@...radead.org, mingo@...nel.org, a.p.zijlstra@...llo.nl
Subject: Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()
On Wed, 2013-01-23 at 10:44 +0800, Michael Wang wrote:
> On 01/22/2013 10:41 PM, Mike Galbraith wrote:
> > On Tue, 2013-01-22 at 16:56 +0800, Michael Wang wrote:
> >
> >> What about this patch? Maybe the wrong map is the killer on the
> >> balance path; should we check it? ;-)
> >
> > [ 1.232249] Brought up 40 CPUs
> > [ 1.236003] smpboot: Total of 40 processors activated (180873.90 BogoMIPS)
> > [ 1.244744] CPU0 attaching sched-domain:
> > [ 1.254131] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
> > [ 1.252010] domain 0: span 0,16 level SIBLING
> > [ 1.280001] groups: 0 (cpu_power = 589) 16 (cpu_power = 589)
> > [ 1.292540] domain 1: span 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38 level MC
> > [ 1.312001] groups: 0,16 (cpu_power = 1178) 2,18 (cpu_power = 1178) 4,20 (cpu_power = 1178) 6,22 (cpu_power = 1178) 8,24 (cpu_power = 1178)
> > 10,26 (cpu_power = 1178) 12,28 (cpu_power = 1178) 14,30 (cpu_power = 1178) 32,36 (cpu_power = 1178) 34,38 (cpu_power = 1178)
> > [ 1.368002] domain 2: span 0-39 level NUMA
> > [ 1.376001] groups: 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38 (cpu_power = 11780)
> > 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 (cpu_power = 11780)
>
> Thanks for the testing; that's not the full output, just the part for
> cpu 0, correct?
Yeah, I presumed one was enough. You can have more if you like, there's
LOTS more where that came from (reboot is amazing with low-speed serial
console -> high-latency, low-bandwidth DSL connection;).
> > [ 1.412546] WYT: sbm of cpu 0
> > [ 1.416001] WYT: exec map
> > [ 1.424002] WYT: sd 6ce55000, idx 0, level 0, weight 2
> > [ 1.436001] WYT: sd 6ce74000, idx 1, level 1, weight 20
> > [ 1.448001] WYT: sd 6cef3000, idx 3, level 3, weight 40
> > [ 1.460001] WYT: fork map
> > [ 1.468001] WYT: sd 6ce55000, idx 0, level 0, weight 2
> > [ 1.480001] WYT: sd 6ce74000, idx 1, level 1, weight 20
>
> This is not by design... the sd in idx 2 should point to the level 1 sd
> if there is no level 2 sd, so this part is broken... oh, how could a
> level 3 sd be there without level 2 being created? Strange...
>
> So with this map, the new balance path will no doubt be broken. I think
> we've found the reason, amazing ;-)
>
> Let's see how to fix it, hmm... needs some study first.
Another thing that wants fixing: root can set flags for _existing_
domains any way he likes, but when he invokes godly powers to rebuild
the domains, he gets what's hard-coded, which is neither clever (godly
wrath;) nor wonderful for godly runtime path decisions.
-Mike