Message-ID: <1331202480.11248.375.camel@twins>
Date:	Thu, 08 Mar 2012 11:28:00 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Roland Dreier <roland@...nel.org>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Paul Turner <pjt@...gle.com>,
	Venkatesh Pallipadi <venki@...gle.com>,
	Suresh Siddha <suresh.b.siddha@...el.com>
Subject: Re: runnable tasks never making it to a runqueue where they can run?

On Wed, 2012-03-07 at 10:03 -0800, Roland Dreier wrote:
> 
> So could the fact that we don't have CPU 11 in our affinity
> mask cause the scheduler not to try the CPU 23 runqueue?
> Is it only looking at the first SMT sibling or something? 

Yes, I think we've conspired against your use-case here.

select_task_rq_fair() will very likely end up in select_idle_sibling(),
which will never select smt1 if smt0 is idle (even if smt0 isn't in the
cpus-allowed mask). Vatsa is currently reworking that area; this is
something he might want to consider.
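
Roughly, the shape of the problem looks like this (an illustrative
sketch only, not the actual kernel code -- pick_idle_sibling() is a
made-up name, and the sibling-mask helper stands in for whatever the
real code walks):

/* Sketch: a sibling scan that stops at the first idle thread. */
static int pick_idle_sibling(struct task_struct *p, int target)
{
	int cpu;

	/* Walk the SMT siblings of @target in topology order, smt0 first. */
	for_each_cpu(cpu, topology_thread_cpumask(target)) {
		if (!idle_cpu(cpu))
			continue;
		/*
		 * smt0 is idle, so the scan is satisfied and stops here;
		 * but smt0 isn't in the task's allowed mask, and we never
		 * go on to consider an equally idle smt1.
		 */
		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
			break;
		return cpu;
	}
	return target;	/* fall back; smt1 was never selected */
}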

The regular load-balancer works by pulling tasks towards itself. It
operates on the sched domains, iterating bottom-up through them, but
only the first cpu in a given domain gets to go up a level.

This means that SMT1 will only ever pull from SMT0, and only SMT0 will
go up to the core level and (possibly) beyond to pull load. Now your
task -- due to its affinity mask -- can never be pulled to SMT0 and
hence will never end up on SMT1.
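
In sketch form (again illustrative; the real rebalance path takes more
arguments and locking than shown here):

/* Sketch of the pull-based balance pass described above. */
static void rebalance_sketch(int this_cpu)
{
	struct sched_domain *sd;

	/* The real code holds rcu_read_lock() around this walk. */
	for_each_domain(this_cpu, sd) {
		/*
		 * Only the first cpu in the domain's span balances at
		 * this level and proceeds to the next, wider one. In
		 * an SMT domain that first cpu is smt0, so smt1 stops
		 * here and only ever pulls from smt0.
		 */
		if (this_cpu != cpumask_first(sched_domain_span(sd)))
			break;

		/* Simplified call; pulls load towards this_cpu. */
		load_balance(this_cpu, sd);
	}
}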

There are a number of very convoluted hacks in the load-balancer to
kinda-sort-of deal with affinity masks that are misaligned wrt the
domain setup, but they're painful and very likely lacking (as
demonstrated here).

For a while I've been toying with the idea of making the least-loaded
cpu in the mask go up in the balance pass -- as opposed to the first cpu
in the mask. This might be the justification, and push, I needed.
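
Something like this (made-up helper, and rq->load.weight is only a
crude load proxy):

/* Sketch: pick the least-loaded cpu in the domain span. */
static int domain_balancer_cpu(struct sched_domain *sd)
{
	unsigned long load, min_load = ULONG_MAX;
	int cpu, best = -1;

	for_each_cpu(cpu, sched_domain_span(sd)) {
		load = cpu_rq(cpu)->load.weight;
		if (load < min_load) {
			min_load = load;
			best = cpu;
		}
	}
	return best;
}

The balance pass above would then test this_cpu !=
domain_balancer_cpu(sd) instead of comparing against cpumask_first(),
so a cpu like SMT1 isn't structurally barred from balancing at the
core level.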

In any case, load-balancing in the presence of affinity masks is an
'interesting' problem :-)
