Message-ID: <20090825085124.GF20811@alberich.amd.com>
Date:	Tue, 25 Aug 2009 10:51:24 +0200
From:	Andreas Herrmann <andreas.herrmann3@....com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Gautham Shenoy <ego@...ibm.com>,
	"svaidy@...ux.vnet.ibm.com" <svaidy@...ux.vnet.ibm.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [PATCH 11/15] sched: Pass unlimited __cpu_power information to
	upper domain level groups

On Mon, Aug 24, 2009 at 05:21:37PM +0200, Peter Zijlstra wrote:
> On Thu, 2009-08-20 at 15:41 +0200, Andreas Herrmann wrote:
> > For performance reasons __cpu_power in a sched_group might be limited
> > such that the group can handle only one task. To correctly calculate
> > the capacity in upper domain level groups the unlimited power
> > information is required. This patch stores unlimited __cpu_power
> > information in sched_groups.orig_power and uses this when calculating
> > __cpu_power in upper domain level groups.
> 
> OK, so this tries to fix the cpu_power wreckage?

Not completely. Just (partially) for my MN domain needs.
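
Just for reference, here is a stand-alone sketch of the idea in the
patch description above (plain C, simplified; the group layout, the
sum_children() helper and the limit handling are illustrative, not the
actual init_sched_groups_power() code):

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

struct group {
	unsigned long cpu_power;   /* possibly limited to one task's worth */
	unsigned long orig_power;  /* unlimited power, kept for parent levels */
};

static void sum_children(struct group *parent, struct group **kids,
			 int n, int use_orig, unsigned long limit)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += use_orig ? kids[i]->orig_power : kids[i]->cpu_power;

	parent->orig_power = sum;
	parent->cpu_power  = (limit && sum > limit) ? limit : sum;
}

int main(void)
{
	/* Two cores with two SMT threads each. */
	struct group cpu0 = { SCHED_LOAD_SCALE, SCHED_LOAD_SCALE };
	struct group cpu1 = { SCHED_LOAD_SCALE, SCHED_LOAD_SCALE };
	struct group cpu2 = { SCHED_LOAD_SCALE, SCHED_LOAD_SCALE };
	struct group cpu3 = { SCHED_LOAD_SCALE, SCHED_LOAD_SCALE };
	struct group core0, core1, node;

	struct group *smt0[]  = { &cpu0, &cpu1 };
	struct group *smt1[]  = { &cpu2, &cpu3 };
	struct group *cores[] = { &core0, &core1 };

	/* Core groups capped to SCHED_LOAD_SCALE so the balancer treats
	 * each core as having capacity for a single task. */
	sum_children(&core0, smt0, 2, 1, SCHED_LOAD_SCALE);
	sum_children(&core1, smt1, 2, 1, SCHED_LOAD_SCALE);

	/* Upper level: summing the capped cpu_power gives 2048, summing
	 * the kept orig_power gives the real 4096. */
	sum_children(&node, cores, 2, 0, 0);
	printf("node power from cpu_power:  %lu\n", node.cpu_power);
	sum_children(&node, cores, 2, 1, 0);
	printf("node power from orig_power: %lu\n", node.cpu_power);

	return 0;
}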

> ok, so let me try this with an example:
> 
> Suppose we have a dual-core with shared cache and SMT
> 
>   0-3     MC
> 0-1 2-3   SMT
> 
> Then both levels fancy setting SHARED_RESOURCES and both levels end up
> normalizing the cpu_power to 1, so when we unplug cpu 2, load-balancing
> gets all screwy because the whole system doesn't get normalized
> properly.

So normalization is broken already, right?

In the sched_smt_power_savings case we have 1024 as __cpu_power for
each SMT sched_group. And at the MC level we always have 2048 as long
as there are two sched_groups at the SMT level.
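
Spelled out (just a sketch of the capacity arithmetic, assuming the
usual round-to-nearest division by SCHED_LOAD_SCALE; not the kernel
code):

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL
#define CAPACITY(power) (((power) + SCHED_LOAD_SCALE / 2) / SCHED_LOAD_SCALE)

int main(void)
{
	unsigned long smt_group = 1024;          /* per SMT sched_group     */
	unsigned long mc_group  = 2 * smt_group; /* two SMT groups -> 2048  */

	printf("SMT group capacity: %lu\n", CAPACITY(smt_group)); /* 1 */
	printf("MC  group capacity: %lu\n", CAPACITY(mc_group));  /* 2 */
	return 0;
}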

> What you propose here is every time we muck with cpu_power we keep the
> real stuff in orig_power and use that to compute the level above.

Yes.

> Except you don't use it in the load-balancer proper, so normalization is
> still hosed.

Yes, the normalization problem you've mentioned is not fixed by this.
But it would be advisable to fix it.

> Its a creative solution, but I'd rather see cpu_power returned to a
> straight sum of actual power to normalize the inter-cpu runqueue weights
> and do the placement decision using something else.

This means no longer artificially restricting __cpu_power to 1024 for
performance scheduling?

Seconded.
But I don't have an impromptu patch for this. ;-(
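
Conceptually, though, I'd read the suggestion as something like the
following (just an illustration of the split between weight
normalization and placement, no claim about actual interfaces):

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

struct group {
	unsigned long cpu_power;   /* always the straight sum of real power */
	unsigned int  task_limit;  /* separate placement hint, e.g. 1 task  */
};

/* Runqueue weight normalization keeps using the real power ... */
static unsigned long scaled_load(unsigned long load, struct group *g)
{
	return load * SCHED_LOAD_SCALE / g->cpu_power;
}

/* ... while the "can this group take another task" decision looks at
 * the separate limit instead of a doctored cpu_power. */
static int can_take_task(struct group *g, unsigned int nr_running)
{
	return !g->task_limit || nr_running < g->task_limit;
}

int main(void)
{
	struct group core = { 2 * SCHED_LOAD_SCALE, 1 };

	printf("scaled load:       %lu\n", scaled_load(1024, &core));
	printf("can take 2nd task: %d\n", can_take_task(&core, 1));
	return 0;
}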


Regards,
Andreas

-- 
Operating | Advanced Micro Devices GmbH
  System  | Karl-Hammerschmidt-Str. 34, 85609 Dornach b. München, Germany
 Research | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni
  Center  | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
  (OSRC)  | Registergericht München, HRB Nr. 43632

