Message-ID: <20140107204951.GD2480@laptop.programming.kicks-ass.net>
Date:	Tue, 7 Jan 2014 21:49:51 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Morten Rasmussen <morten.rasmussen@....com>
Cc:	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <Dietmar.Eggemann@....com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"mingo@...nel.org" <mingo@...nel.org>,
	"pjt@...gle.com" <pjt@...gle.com>,
	"cmetcalf@...era.com" <cmetcalf@...era.com>,
	"tony.luck@...el.com" <tony.luck@...el.com>,
	"alex.shi@...aro.org" <alex.shi@...aro.org>,
	"preeti@...ux.vnet.ibm.com" <preeti@...ux.vnet.ibm.com>,
	"linaro-kernel@...ts.linaro.org" <linaro-kernel@...ts.linaro.org>,
	"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
	"corbet@....net" <corbet@....net>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"len.brown@...el.com" <len.brown@...el.com>,
	"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
	"amit.kucheria@...aro.org" <amit.kucheria@...aro.org>,
	"james.hogan@...tec.com" <james.hogan@...tec.com>,
	"schwidefsky@...ibm.com" <schwidefsky@...ibm.com>,
	"heiko.carstens@...ibm.com" <heiko.carstens@...ibm.com>
Subject: Re: [RFC] sched: CPU topology try

On Tue, Jan 07, 2014 at 03:41:54PM +0000, Morten Rasmussen wrote:
> I think that could work if we sort out the priority scaling issue that I
> mentioned before.

We talked a bit about this on IRC a month or so ago, right? As I
recall, your main complaint was that we don't detect the overload
scenario correctly.

That is, the point at which we should start caring about SMP-nice is
when all our CPUs are fully occupied, because up to that point we're
under-utilized and work conservation mandates we use the idle time.

Currently we detect overload by sg.nr_running >= sg.capacity, which can
be very misleading: while a CPU might have a task running 'now', it
might still be 99% idle.

At which point I argued we should change the capacity test anyhow.
Ever since the runnable_avg patch set I've been arguing to change it
into an actual utilization test.

So I think that if we measure overload as something like >95%
utilization of the entire group, the load scaling again makes perfect
sense.
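
To sketch the difference in (hypothetical) code -- the struct and field
names are made up for illustration, not the actual struct sched_group
layout, and the 95% cutoff is just the number from above:

struct sg_stats {
	unsigned int nr_running;	/* runnable tasks in the group */
	unsigned int capacity;		/* CPUs' worth of compute in the group */
	unsigned int util_pct;		/* tracked utilization, % of capacity */
};

/* Current test: a CPU with a task running 'now' counts as busy,
 * even if that task is 99% idle. */
static int overloaded_by_count(const struct sg_stats *sg)
{
	return sg->nr_running >= sg->capacity;
}

/* Proposed test: only declare overload once the runnable_avg-style
 * tracked utilization crosses ~95% of the group's capacity. */
static int overloaded_by_util(const struct sg_stats *sg)
{
	return sg->util_pct > 95;
}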

Given the 3-task {A,B,C} workload where A and B are niced, we'd want
them to land on a symmetric dual-CPU system like {A,B}+{C}, assuming
they're all while(1) loops :-).
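
(Concrete numbers, assuming A and B are at nice +5: prio_to_weight
gives 1024 for nice 0 and 335 for nice +5, so {A,B}+{C} splits the
load 670 vs 1024, whereas {A,C}+{B} would be 1359 vs 335 -- the former
is clearly the least imbalanced placement.)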

The harder case is where all 3 tasks are of equal weight, in which
case fairness would mandate we (slowly) rotate the tasks such that
they each get 2/3 of a CPU -- we also horribly fail at this :-)
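
(The 2/3 figure: 2 CPUs' worth of time shared fairly among 3 equal
tasks is 2/3 of a CPU each, whereas any static placement gives the
doubled-up pair 1/2 each and the solo task a whole CPU -- hence the
need to rotate.)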
