Message-ID: <1444355819.3232.62.camel@gmail.com>
Date:	Fri, 09 Oct 2015 03:56:59 +0200
From:	Mike Galbraith <umgwanakikbuti@...il.com>
To:	paul.szabo@...ney.edu.au
Cc:	linux-kernel@...r.kernel.org, peterz@...radead.org
Subject: Re: CFS scheduler unfairly prefers pinned tasks

On Fri, 2015-10-09 at 08:55 +1100, paul.szabo@...ney.edu.au wrote:
> Dear Mike,
> 
> >>> I see a fairness issue ... but one opposite to your complaint.
> >> Why is that opposite? ...
> >
> > Well, not exactly opposite, only opposite in that the one pert task also
> > receives MORE than its fair share when unpinned.  Two 100% hogs sharing
> > one CPU should each get 50% of that CPU. ...
> 
> But you are using cgroups, grouping all oinks into one group and the
> one pert into another: requesting each group to get the same total CPU.
> Since pert is a single process, the most it can get is 100% (not 400%),
> and it is quite OK for the oinks together to get 700%.
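The arithmetic Paul describes can be sketched as below. This is a hedged illustration, not scheduler code: it assumes the setup discussed in the thread (8 CPUs, a cgroup of 8 "oink" hogs, a cgroup with the single "pert" task, equal shares per group), and the function name and structure are mine.

```python
# Illustrative only: split CPUs equally among groups, then cap each
# group at what its runnable-task count can actually consume and hand
# the surplus to groups that can still use it.
def group_fair_shares(ncpus, group_ntasks):
    """Return CPUs'-worth of bandwidth per group under equal shares."""
    shares = {g: ncpus / len(group_ntasks) for g in group_ntasks}
    surplus = 0.0
    for g, n in group_ntasks.items():
        # A group with n runnable tasks can use at most n CPUs' worth.
        if shares[g] > n:
            surplus += shares[g] - n
            shares[g] = float(n)
    hungry = [g for g, n in group_ntasks.items() if shares[g] < n]
    for g in hungry:  # one pass suffices for this two-group example
        shares[g] += surplus / len(hungry)
    return shares

shares = group_fair_shares(8, {"oink": 8, "pert": 1})
print(shares)  # pert caps at 1.0 CPU (100%); oink picks up the rest, 7.0 (700%)
```

With equal group shares both groups start at 4 CPUs' worth; pert can only consume one, so the other three flow back to the oinks, matching the 100%/700% split above.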

Well, that of course depends on what you call fair.  I realize why and
where it happens.  I told the weight adjustment to keep its grubby mitts
off autogroups, and of course the "problem" went away.  Back to the
viewpoint thing: with two users, each having been _placed_ in a group, I
can well imagine a user who is trying to use all of his authorized
bandwidth raising an eyebrow when he sees one of his tasks getting 24
whole milliseconds per second from an allegedly fair scheduler.
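To put that figure in perspective (the 24 ms/s observation and the 50% expectation for two tasks sharing one CPU are from this thread; the ratio computation is just illustration):

```python
# Illustrative arithmetic only.
observed_ms_per_s = 24
observed_share = observed_ms_per_s / 1000   # 0.024 -> 2.4% of a CPU
fair_share = 0.5                            # two tasks sharing one CPU
shortfall = fair_share / observed_share     # roughly a 20x gap
print(f"observed {observed_share:.1%} vs expected {fair_share:.0%}, "
      f"~{shortfall:.0f}x short")
```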

I can see it both ways.  What's going to come out of this is probably
going to be "tough titty, yes, group scheduling has side effects, and
this is one".  I already know it does.  The question is only whether the
weight adjustment gears are spinning as intended or not.

> > IFF ... massively parallel and synchronized ...
> 
> You would be making the assumption that you had the machine to yourself:
> might be the wrong thing to assume.

Yup, it would be a doomed attempt to run a load which cannot thrive in a
shared environment in such an environment.  Are any of the compute loads
you're having trouble with... in the math department... perhaps doing oh,
say complex math goop that feeds the output of one parallel computation
into the next parallel computation? :)

	-Mike

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/