lists.openwall.net — Open Source and information security mailing list archives
Date: Wed, 11 Jan 2012 12:37:44 -0500
From: Youquan Song <youquan.song@...el.com>
To: Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
Cc: Suresh Siddha <suresh.b.siddha@...el.com>, Youquan Song <youquan.song@...el.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, linux-kernel@...r.kernel.org,
	mingo@...e.hu, tglx@...utronix.de, hpa@...or.com, akpm@...ux-foundation.org,
	stable@...r.kernel.org, arjan@...ux.intel.com, len.brown@...el.com,
	anhua.xu@...el.com, chaohong.guo@...el.com,
	Youquan Song <youquan.song@...ux.intel.com>
Subject: Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken

> I had tested sched_mc_powersavings=2 on dual socket quad core HT
> nehalem. It worked as expected. Let me check the
> sched_mc_powersavings=1 case. I will not be surprised if it is
> broken.

In my opinion, when a CPU has more than 8 cores, sched_mc will certainly have
this issue, and it only becomes more obvious as the core count grows under the
current CPU power capability calculation logic. If a CPU has 8 cores and HT is
enabled, its computed power capability is worth more than 9 cores: with the
current logic, an HT-enabled core's power capability is 1178, and
1178 * 8 / 1024 > 9.2. So with sched_mc the scheduler will try to schedule
more than 8 threads onto one socket, to fill its power capability, before
using the other socket.

So it would be better to do something like the following. Of course, this is
only sample code and I have not tried it yet.

For MC:
+	if (sched_mc_power_savings)
+		sgs->group_capacity = group->group_weight / 2;

For SMT:
+	if (sched_smt_power_savings && !(sd->flags & SD_SHARE_CPUPOWER))
+		sgs->group_capacity = group->group_weight;

Thanks
-Youquan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/