Date: Tue, 18 Nov 2008 13:30:43 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Ken Chen <kenchen@...gle.com>
Cc: Chris Friesen <cfriesen@...tel.com>, Ingo Molnar <mingo@...e.hu>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: busted CFS group load balancer?

On Mon, 2008-11-17 at 23:33 -0800, Ken Chen wrote:
> On Mon, Nov 17, 2008 at 9:19 PM, Peter Zijlstra wrote:
> > Note that with larger cpu count and/or lower group weight we'll quickly
> > run into numerical trouble...
> >
> > I would recommend trying this with the minimum weight in the order of
> > 8-16 times the number of cpus on your system.
> >
> > There is only so much one can do with 10-bit fixed precision math :/
>
> That is probably one of the many problems. I also found that the
> updates to the per-cpu task_group's sched_entity load weight
> (tg->se[cpu]->load.weight) are very problematic and erratic.
>
> The total rq_weight is calculated once at the beginning of tg_shares_up():
>
>	for_each_cpu_mask(i, sd->span) {
>		rq_weight += tg->cfs_rq[i]->load.weight;
>		shares += tg->cfs_rq[i]->shares;
>	}
>
> However, the scaling of the per-cpu se->load.weight in
> __update_group_shares_cpu() takes another lookup of
> tg->cfs_rq[cpu]->load.weight at a different time, and
> cfs_rq[cpu].load.weight is not always consistent across those two
> reads. Because of these inconsistent per-cpu cfs_rq values, I've
> seen tg->se[cpu]->load.weight jumping all over the place. In our
> environment, the cpu loads are very dynamic: processes queue and
> dequeue at a high rate.

OK, if your load values are very unstable on the order of the
load-balance interval, then you're hosed too; the same is true for the
normal SMP load-balancer. The cgroup load-balancer makes that even more
problematic.

Again, there's just very little you can do about that, except increase
the coupling between cpus and thereby increase the overhead. Try
decreasing sysctl_sched_shares_ratelimit.

> I'm also very troubled by this calculation in __update_group_shares_cpu():
>
>	shares = (sd_shares * rq_weight) / (sd_rq_weight + 1);
>
> Won't you have a rounding problem here? The value of 'shares' will
> gradually decrease on each iteration of __update_group_shares_cpu().

Yes it will; however, at the top of the sched-domain tree it is reset:

	if (!sd->parent || !(sd->parent->flags & SD_LOAD_BALANCE))
		shares = tg->shares;

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
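
To see why Peter's 8-16x rule of thumb matters, here is a minimal
user-space sketch (plain C, not kernel code; the weight and cpu count
are made-up values chosen only to show the truncation):

	#include <stdio.h>

	int main(void)
	{
		/* Group weight expressed in the scheduler's 10-bit fixed
		 * point (NICE_0 load == 1024). A weight smaller than the
		 * cpu count truncates to zero on most cpus. */
		unsigned long tg_shares = 8;	/* small group weight */
		unsigned long ncpus = 64;

		printf("per-cpu share: %lu\n", tg_shares / ncpus);  /* 0 */
		printf("with weight = 16*ncpus: %lu per cpu\n",
		       (16UL * ncpus) / ncpus);			    /* 16 */
		return 0;
	}

With the weight below the cpu count, integer division leaves most cpus
with a share of zero; at 8-16 times the cpu count every cpu keeps a
usable, non-zero share.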
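The rounding drift Ken asks about can likewise be reproduced outside the
kernel. A sketch with assumed, illustrative values (only the division
expression and the reset are taken from the thread above) shows 'shares'
bleeding away a little on every pass until the reset at the top of the
sched-domain tree:

	#include <stdio.h>

	int main(void)
	{
		unsigned long tg_shares = 1024;		/* tg->shares */
		unsigned long sd_shares = tg_shares;
		unsigned long rq_weight = 512, sd_rq_weight = 512;

		/* Repeatedly apply the kernel's expression; integer
		 * truncation plus the +1 (which guards against a zero
		 * denominator) lose a little weight each time. */
		for (int pass = 0; pass < 4; pass++) {
			sd_shares = (sd_shares * rq_weight)
					/ (sd_rq_weight + 1);
			printf("pass %d: shares = %lu\n", pass, sd_shares);
		}

		/* The reset Peter points at: at the top of the tree,
		 * shares snaps back to tg->shares. */
		sd_shares = tg_shares;
		printf("after reset: shares = %lu\n", sd_shares);
		return 0;
	}

With these numbers the value drops by roughly two per pass (1022, 1020,
1018, 1016); without the reset it would decay toward zero.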