Message-ID: <1326107156.2442.59.camel@twins>
Date:	Mon, 09 Jan 2012 12:05:56 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Youquan Song <youquan.song@...el.com>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, tglx@...utronix.de,
	hpa@...or.com, akpm@...ux-foundation.org, stable@...r.kernel.org,
	suresh.b.siddha@...el.com, arjan@...ux.intel.com,
	len.brown@...el.com, anhua.xu@...el.com, chaohong.guo@...el.com,
	Youquan Song <youquan.song@...ux.intel.com>
Subject: Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken

On Mon, 2012-01-09 at 19:14 -0500, Youquan Song wrote:
> Fine, I will base another patch on your suggestion soon.
> 

> 
> @@ -3923,6 +3923,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
>                                                 SCHED_POWER_SCALE);
>         if (!sgs->group_capacity)
>                 sgs->group_capacity = fix_small_capacity(sd, group);
> +
> +       if (sched_smt_power_savings)
> +               sgs->group_capacity *= 2;

Note, this has the hard-coded assumption that you only have 2 threads per
core, which, while true for Intel, isn't true in general. I think you
meant to write *= group->group_weight or some such.

Also, you forgot to limit this to the SD_SHARE_CPUPOWER domain, you're
now doubling the capacity for all domains.

Furthermore, have a look at the SD_PREFER_SIBLING logic and make sure
you're not fighting that.