Date:	Mon, 15 Aug 2016 16:30:04 +0100
From:	Morten Rasmussen <morten.rasmussen@....com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	mingo@...hat.com, dietmar.eggemann@....com, yuyang.du@...el.com,
	vincent.guittot@...aro.org, mgalbraith@...e.de,
	sgurrappadi@...dia.com, freedom.tan@...iatek.com,
	keita.kobayashi.ym@...esas.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 09/13] sched/fair: Let asymmetric cpu configurations
 balance at wake-up

On Mon, Aug 15, 2016 at 05:10:06PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 15, 2016 at 04:01:34PM +0100, Morten Rasmussen wrote:
> > On Mon, Aug 15, 2016 at 03:39:49PM +0200, Peter Zijlstra wrote:
> > > > +static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
> > > > +{
> > > > +	long min_cap, max_cap;
> > > > +
> > > > +	min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
> > > > +	max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;
> > > 
> > > There's a tiny hole here, which I'm fairly sure we don't care about. If
> > > @p last ran on @prev_cpu before @prev_cpu was split from @rd this
> > > doesn't 'work' right.
> > 
> > I hadn't considered that. What is 'working right' in this scenario?
> > Ignoring @prev_cpu as it isn't a valid option anymore?
> 
> Probably, yeah.
> 
> > In that case, since @prev_cpu is only used as part of the min() it should
> > only cause min_cap to be potentially smaller than it should be, not
> > larger. It could lead us to let BALANCE_WAKE take over in scenarios
> > where select_idle_sibling() would have been sufficient, but it should
> > harm.
> 
> +not, right?

Yes :)
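
For reference, a stale (too small) capacity_orig_of(prev_cpu) only makes
min_cap smaller, and min_cap only feeds the final comparison, so the worst
case is that we return 1 and take the slower BALANCE_WAKE path when
select_idle_sibling() would have been fine. Roughly what I have in mind for
the rest of the function (task_util() and capacity_margin come from
elsewhere in the series, so treat the exact thresholds as a sketch):

	static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
	{
		long min_cap, max_cap;

		min_cap = min(capacity_orig_of(prev_cpu), capacity_orig_of(cpu));
		max_cap = cpu_rq(cpu)->rd->max_cpu_capacity;

		/* Capacities are close enough, select_idle_sibling() will do. */
		if (max_cap - min_cap < max_cap >> 3)
			return 0;

		/*
		 * A too-small min_cap can only make this more likely to be
		 * true, i.e. more likely to take the BALANCE_WAKE path.
		 */
		return min_cap * 1024 < task_util(p) * capacity_margin;
	}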

> 
> > However, as you say, I'm not sure if we care that much.
> 
> Yeah, don't think so, it's extremely unlikely to happen, almost nobody
> mucks about with root_domains anyway. And those that do, do so once to
> set things up and then leave them be.
> 
> > Talking about @rd, I discussed with Juri and Dietmar the other week
> > whether the root_domain is RCU protected, and whether we therefore have to
> > move the call to wake_cap() after the rcu_read_lock() below. I haven't
> > done a thorough investigation to find the answer yet. Should it be
> > protected?
> 
> Yeah, I think either RCU or RCU-sched, I forever forget.

Okay. Should I send an updated version?
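
I was thinking of simply moving the want_affine computation (and hence the
wake_cap() call that dereferences rd) below the rcu_read_lock() in
select_task_rq_fair(), roughly like this (surrounding context paraphrased,
not a real diff):

	rcu_read_lock();

	/*
	 * Computed under the RCU read lock so the rd->max_cpu_capacity
	 * access in wake_cap() is covered as well.
	 */
	if (sd_flag & SD_BALANCE_WAKE) {
		record_wakee(p);
		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
			      cpumask_test_cpu(cpu, tsk_cpus_allowed(p));
	}

	/*
	 * for_each_domain() walk and the rest of select_task_rq_fair()
	 * unchanged, still ending with rcu_read_unlock().
	 */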
