Message-ID: <20150709232416.GI5197@intel.com>
Date:	Fri, 10 Jul 2015 07:24:16 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	Morten Rasmussen <morten.rasmussen@....com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Rabin Vincent <rabin.vincent@...s.com>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>
Subject: Re: [PATCH?] Livelock in pick_next_task_fair() / idle_balance()

Hi,

On Thu, Jul 09, 2015 at 03:32:20PM +0100, Morten Rasmussen wrote:
> On Mon, Jul 06, 2015 at 06:31:44AM +0800, Yuyang Du wrote:
> > On Fri, Jul 03, 2015 at 06:38:31PM +0200, Peter Zijlstra wrote:
> > > > I'm not against having a policy that sits somewhere in between, we just
> > > > have to agree it is the right policy and clean up the load-balance code
> > > > such that the implemented policy is clear.
> > > 
> > > Right, for balancing it's a tricky question, but mixing them without
> > > intent is, as you say, a bit of a mess.
> > > 
> > > So clearly blocked load doesn't make sense for (new)idle balancing. OTOH
> > > it does make some sense for the regular periodic balancing, because
> > > there we really do care mostly about the averages, esp. so when we're
> > > overloaded -- but there are issues there too.
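
(To make the distinction concrete, a rough sketch; balance_load() is an
illustrative name, not the real source_load()/target_load() signatures.
The balance type decides which average we look at.)

static unsigned long balance_load(int cpu, enum cpu_idle_type idle)
{
        struct rq *rq = cpu_rq(cpu);

        /*
         * (New)idle balance wants work to run right now, so blocked
         * load is irrelevant: use the runnable-only average.
         */
        if (idle == CPU_NEWLY_IDLE)
                return rq->cfs.runnable_load_avg;

        /*
         * Periodic balance cares about the long-run picture, where
         * blocked load still matters: use the full average.
         */
        return rq->cfs.avg.load_avg;
}
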
> > > 
> > > Now we can't track them both (or rather we could, but overhead).
> > > 
> > > I like Yuyang's load tracking rewrite, but it changes exactly this part,
> > > and I'm not sure I understand the full ramifications of that yet.
> 
> I don't think anybody does ;-) But I think we should try to make it
> work.
> 
> > Thanks. It would be a pure average policy, which is imperfect like now
> > and certainly needs some mixing like now, but it is worth a start, because
> > it is simple and reasonable, and based on it, the other parts can be
> > simple and reasonable.
> 
> I think we all agree on the benefits of taking blocked load into
> account but also that there are some policy questions to be addressed.
> 
> > > One way out would be to split the load balancer into 3 distinct regions;
> > > 
> > >  1) get a task on every CPU, screw everything else.
> > >  2) get each CPU fully utilized, still ignoring 'load'
> > >  3) when everybody is fully utilized, consider load.
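
(Roughly, the region could be derived from the rq states; a sketch with
made-up names, where cpu_fully_utilized() is a hypothetical helper:)

enum lb_region {
        LB_SPREAD,      /* 1) some CPU is idle */
        LB_UTIL,        /* 2) all busy, someone has spare capacity */
        LB_LOAD,        /* 3) everybody fully utilized */
};

static enum lb_region classify_lb_region(struct sched_domain *sd)
{
        int cpu, has_idle = 0, has_spare = 0;

        for_each_cpu(cpu, sched_domain_span(sd)) {
                struct rq *rq = cpu_rq(cpu);

                if (!rq->nr_running)
                        has_idle = 1;
                else if (!cpu_fully_utilized(cpu))
                        has_spare = 1;
        }

        if (has_idle)
                return LB_SPREAD;
        if (has_spare)
                return LB_UTIL;
        return LB_LOAD;
}
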
> 
> Seems very reasonable to me. We more or less follow that idea in the
> energy-model driven scheduling patches, at least 2) and 3).
> 
> The difficult bit is detecting when to transition between 2) and 3). If
> you want to enforce smp_nice you have to start worrying about task
> priority as soon as one cpu is fully utilized.
> 
> For example, a fully utilized cpu has two high priority tasks while all
> other cpus are running low priority tasks and are not fully utilized.
> The utilization imbalance may be too small to cause any tasks to be
> migrated, so we end up giving fewer cycles to the high priority tasks.
> 
> > > If we make find_busiest_foo() select one of these 3, and make
> > > calculate_imbalance() invariant to the metric passed in, and have things
> > > like cpu_load() and task_load() return different, but coherent, numbers
> > > depending on which region we're in, this almost sounds 'simple'.
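
(Continuing the sketch above: cpu_load() made region-aware, returning
different but coherent numbers depending on the region.)

static unsigned long cpu_load(int cpu, enum lb_region region)
{
        struct rq *rq = cpu_rq(cpu);

        switch (region) {
        case LB_SPREAD:
                return rq->nr_running;          /* just the task count */
        case LB_UTIL:
                return rq->cfs.avg.util_avg;    /* raw utilization */
        case LB_LOAD:
        default:
                return rq->cfs.avg.load_avg;    /* weighted load */
        }
}
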
> > > 
> > > The devil is in the details, and the balancer is a hairy nest of details
> > > which will make the above non-trivial.
> 
> Yes, but if we have an overall policy like the one you propose we can at
> least make it complicated and claim that we think we know what it is
> supposed to do ;-)
> 
> I agree that there is some work to be done in find_busiest_*() and
> calculate_imbalance() + friends. Maybe step one should be to clean them
> up a bit.

The consensus looks like we should move step by step and start working right now:

1) Based on the "Rewrite" patch, let me add cfs_rq->runnable_load_avg. Then
   we will have everything up to date: load.weight, runnable_load_avg, and
   load_avg (including runnable + blocked), from pure now to pure average.
   The runnable_load_avg will be used the same as now, so there will be no
   ramifications (see the sketch after this list). As long as the code is
   cleaned up and simplified, it is a win.

2) Let's clean up the load balancing code a bit and, if needed, make
   changes to the obvious things; otherwise leave it unchanged.

3) Polish/complicate the policies. :)
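
For 1), the change on top of the rewrite would be small, roughly like this
(a sketch, not the actual patch; the helper names mirror the existing
enqueue/dequeue paths):

/* in struct cfs_rq: average load of the runnable entities only */
unsigned long runnable_load_avg;

/* kept in sync as entities come and go (sketch) */
static void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
                                    struct sched_entity *se)
{
        cfs_rq->runnable_load_avg += se->avg.load_avg;
}

static void dequeue_entity_load_avg(struct cfs_rq *cfs_rq,
                                    struct sched_entity *se)
{
        cfs_rq->runnable_load_avg -= se->avg.load_avg;
}

The same +/- adjustment would be needed wherever se->avg.load_avg itself
is updated, so the sum stays coherent.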

What do you think?

Thanks,
Yuyang