Open Source and information security mailing list archives
Date: Fri, 16 May 2014 10:23:11 +0800
From: Michael wang <wangyun@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Rik van Riel <riel@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
    Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
    Alex Shi <alex.shi@...aro.org>, Paul Turner <pjt@...gle.com>,
    Mel Gorman <mgorman@...e.de>, Daniel Lezcano <daniel.lezcano@...aro.org>
Subject: Re: [ISSUE] sched/cgroup: Does cpu-cgroup still works fine nowadays?

On 05/15/2014 07:57 PM, Peter Zijlstra wrote:
[snip]
>>
>> It's like:
>>
>> /cgroup/cpu/l1/l2/l3/l4/l5/l6/A
>>
>> about level 7, the issue can not be solved any more.
>
> That's pretty retarded and yeah, that's way past the point where things
> make sense. You might be lucky and have l1-5 as empty/pointless
> hierarchy so the effective depth is less and then things will work, but
> *shees*..

Exactly, that's a simulation of the cgroup topology libvirt sets up. It
really doesn't make sense... more torture than deployment, but they do
build things like that...

> [snip]
>> I'm not sure which account will turns to be huge when group get deeper,
>> the load accumulation will suffer discount when passing up, isn't it?
>>
>
> It'll use 20 bits for precision instead of 10, so it gives a little more
> 'room' for deeper hierarchies/big cpu-count.

Got it :)

>
> All assuming you're running 64bit kernels of course.

Yes, it's 64bit. I tried the testing with this feature on, but it
doesn't seem to address the issue...

One difference we did find as the group gets deeper is that the tasks of
that group gather on one CPU more often; sometimes all the dbench
instances were running on the same CPU. This doesn't happen for an l1
group, which may explain why dbench can no longer get more than 100%
CPU. But why the gathering happens when the group gets deeper is still
unclear...

Will try to figure it out :)

Regards,
Michael Wang

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
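[Editor's note: Peter's point about 20 bits of precision vs 10 can be sketched with a rough back-of-the-envelope model. This is not kernel code; the share-division formula and the `busy` competitor count below are simplifying assumptions, meant only to show how integer truncation at each hierarchy level eats a leaf task's effective weight faster at low fixed-point resolution.]

```python
# Hedged sketch: a toy model of group-share division in a nested
# cpu-cgroup hierarchy. At each level, a group entity's weight is
# shares * (its load / total load on the level), computed in integer
# arithmetic, so each level truncates. `shift` plays the role of the
# fixed-point load resolution (10 bits on 32-bit, 20 bits on 64-bit
# kernels); `busy` is an assumed number of competing load units.

def effective_load(depth, shift, busy=4):
    nice0 = 1 << shift          # weight of a nice-0 task at this resolution
    load = nice0                # one runnable task at the leaf
    for _ in range(depth):
        # divide the group's share at each level on the way up,
        # truncating in integer arithmetic just like fixed-point math
        load = (nice0 * load) // (load + busy * nice0)
    return load

for depth in (1, 3, 5, 7):
    print(depth, effective_load(depth, shift=10), effective_load(depth, shift=20))
```

In this toy model the 10-bit leaf contribution truncates all the way to 0 around depth 5, while at 20 bits a depth-7 leaf still carries a nonzero weight — consistent with Peter's "a little more room for deeper hierarchies", and with the observation that it helps but doesn't make a 7-level libvirt-style hierarchy behave like a flat one.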
Powered by blists - more mailing lists