Message-ID: <51DD5BFC.8000102@linux.intel.com>
Date:	Wed, 10 Jul 2013 06:05:00 -0700
From:	Arjan van de Ven <arjan@...ux.intel.com>
To:	Morten Rasmussen <morten.rasmussen@....com>
CC:	"mingo@...nel.org" <mingo@...nel.org>,
	"peterz@...radead.org" <peterz@...radead.org>,
	"vincent.guittot@...aro.org" <vincent.guittot@...aro.org>,
	"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
	"alex.shi@...el.com" <alex.shi@...el.com>,
	"efault@....de" <efault@....de>, "pjt@...gle.com" <pjt@...gle.com>,
	"len.brown@...el.com" <len.brown@...el.com>,
	"corbet@....net" <corbet@....net>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	Catalin Marinas <Catalin.Marinas@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>
Subject: Re: [RFC][PATCH 0/9] sched: Power scheduler design proposal


>
>>
>> also, it almost looks like there is a fundamental assumption in the code
>> that you can get the current effective P state to make scheduler decisions on;
>> on Intel at least that is basically impossible... and getting more so with every generation
>> (likewise for AMD afaics)
>>
>> (you can get what you ran at on average over some time in the past, but not
>> what you're at now or going forward)
>>
>
> As described above, it is not a strict assumption. From a scheduler
> point of view we somehow need to know whether the cpus are truly fully
> utilized (at their highest P-state),

unfortunately we can't provide this on Intel ;-(
we can tell you what you ran at on average, but we cannot tell you whether that was the max or not

(first of all, because we outright don't know what the max would have been, and second,
because we may have been running slower than the max because the workload was memory bound,
or because of any of the other conditions that make the HW P-state "governor" decide to
reduce frequency for efficiency reasons)

> in which case we need to throw more cpus at the
> problem (assuming that we have more than one task per cpu), or whether we can
> just go to a higher P-state. We don't need a strict guarantee that we
> get exactly the P-state that we request for each cpu. The power
> scheduler generates hints and the power driver gives us feedback on what
> we can roughly expect to get.
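
purely to illustrate the hint/feedback split described above, with made-up names rather
than the interface from the RFC, the idea is roughly:

/*
 * Illustration only: the power scheduler asks for a rough per-cpu
 * capacity, and the power driver replies with a best-effort estimate
 * of what it can deliver, not a guarantee of an exact P-state.
 */
struct power_request {
	int cpu;
	unsigned int capacity;		/* requested, 0..1024 */
};

struct power_feedback {
	unsigned int capacity;		/* roughly what to expect, 0..1024 */
};

/* hypothetical hook implemented by the platform's power driver */
int power_driver_set_capacity(const struct power_request *req,
			      struct power_feedback *fb);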


>
>> I'm rather nervous about calculating how many cores you want active as a core scheduler feature.
>> I understand that for your big.LITTLE architecture you need this due to the asymmetry,
>> but as a general rule for more symmetric systems it's known to be suboptimal by quite a
>> real percentage. For a normal Intel single CPU system it's sort of the worst case you can do
>> in that it leads to serializing tasks that could have run in parallel over multiple cores/threads.
>> So at minimum this kind of logic must be enabled/disabled based on architecture decisions.
>
> Packing clearly has to take power topology into account and do the right
> thing for the particular platform. It is not in place yet, but will be
> addressed. I believe it would make sense for dual cpu Intel systems to
> pack at socket level?

a little bit. if you have a system with two quad-core sockets, it will make sense to pack 2 tasks
onto a single core, assuming they are not cache or memory bandwidth bound (remember this is NUMA!),
but if you have 4 tasks, it's not likely to be worth it to pack, unless you get an enormous
economy of scale due to cache sharing.
(this is far more about getting NUMA balancing right than about power; you're not very likely
to win back the power you lose from inefficiency if you get the NUMA side wrong by being
too smart about power placement)
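
to make the trade-off concrete, an entirely made-up heuristic (not actual scheduler code):

/* illustration only: made-up helper, not scheduler code */
static bool worth_packing_on_one_socket(unsigned int nr_tasks,
					unsigned int cores_per_socket,
					bool memory_or_cache_bound)
{
	/*
	 * memory/cache bound tasks want the bandwidth and cache of both
	 * nodes; getting NUMA placement wrong costs more than packing saves
	 */
	if (memory_or_cache_bound)
		return false;

	/*
	 * a couple of tasks packed onto one socket (leaving the other
	 * idle) can be worth it; filling a whole socket rarely is
	 */
	return nr_tasks <= cores_per_socket / 2;
}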


