Message-ID: <20181019153316.GB15416@lerouge>
Date:   Fri, 19 Oct 2018 17:33:17 +0200
From:   Frederic Weisbecker <frederic@...nel.org>
To:     Rik van Riel <riel@...riel.com>
Cc:     Jan H. Schönherr <jschoenh@...zon.de>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org,
        Subhra Mazumdar <subhra.mazumdar@...cle.com>
Subject: Re: [RFC 00/60] Coscheduling for Linux

On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
> On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
> > 
> > Now, it would be possible to "invent" relocatable cpusets to address
> > that issue ("I want affinity restricted to a core, I don't care
> > which"), but then, the current way cpuset affinity is enforced
> > doesn't scale for making use of it from within the balancer. (The
> > upcoming load balancing portion of the coscheduler currently uses a
> > file similar to cpu.scheduled to restrict affinity to a
> > load-balancer-controlled subset of the system.)
> 
> Oh boy, so the coscheduler is going to get its
> own load balancer?
> 
> At that point, why bother integrating the
> coscheduler into CFS, instead of making it its
> own scheduling class?
> 
> CFS is already complicated enough that it borders
> on unmaintainable. I would really prefer to have
> the coscheduler code separate from CFS, unless
> there is a really compelling reason to do otherwise.

I guess he wants to reuse as much as possible of the CFS features and
code, present or to come (nice, fairness, load balancing, power
awareness, NUMA awareness, etc.).

OTOH you're right, the thing has requirements specific enough to justify
a new sched policy. And really I would love to see all that code kept
separate from CFS, for the reasons you just outlined. So I'm crossing my
fingers that Jan's answer on a new policy goes that way.
