Message-ID: <e34f720326a03ac07e3156abf77f0f6c22ce4289.camel@surriel.com>
Date:   Fri, 19 Oct 2018 11:45:19 -0400
From:   Rik van Riel <riel@...riel.com>
To:     Frederic Weisbecker <frederic@...nel.org>
Cc:     "Jan H." Schönherr <jschoenh@...zon.de>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org,
        Subhra Mazumdar <subhra.mazumdar@...cle.com>
Subject: Re: [RFC 00/60] Coscheduling for Linux

On Fri, 2018-10-19 at 17:33 +0200, Frederic Weisbecker wrote:
> On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
> > On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
> > > 
> > > Now, it would be possible to "invent" relocatable cpusets to address
> > > that issue ("I want affinity restricted to a core, I don't care
> > > which"), but then, the current way cpuset affinity is enforced
> > > doesn't scale for making use of it from within the balancer. (The
> > > upcoming load balancing portion of the coscheduler currently uses a
> > > file similar to cpu.scheduled to restrict affinity to a
> > > load-balancer-controlled subset of the system.)
> > 
> > Oh boy, so the coscheduler is going to get its
> > own load balancer?
> > 
> > At that point, why bother integrating the
> > coscheduler into CFS, instead of making it its
> > own scheduling class?
> > 
> > CFS is already complicated enough that it borders
> > on unmaintainable. I would really prefer to have
> > the coscheduler code separate from CFS, unless
> > there is a really compelling reason to do otherwise.
> 
> I guess he wants to reuse as much as possible of the CFS features and
> code, present or to come (nice levels, fairness, load balancing, power
> awareness, NUMA awareness, etc.).

I wonder if things like nice levels, fairness,
and load balancing could be broken out into
code that could be reused by both CFS and a
new co-scheduler scheduling class.

A bunch of the cgroup code is already broken
out, but maybe some more could be broken out
and shared, too?
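
As an illustration of what "its own scheduling class" means at
the code level (the callback names below are existing struct
sched_class members, but the table is abbreviated, signatures are
omitted, and none of this is taken from the coscheduling patches),
the class would register its own hook table rather than threading
new cases through fair.c:

/*
 * Illustrative sketch only: a coscheduling class would plug into the
 * core scheduler through its own callback table, the same way
 * fair_sched_class and rt_sched_class do.  Callback implementations
 * and the rest of the table are omitted here.
 */
static const struct sched_class cosched_sched_class = {
	.enqueue_task	= enqueue_task_cosched,
	.dequeue_task	= dequeue_task_cosched,
	.pick_next_task	= pick_next_task_cosched,
	.put_prev_task	= put_prev_task_cosched,
	.task_tick	= task_tick_cosched,
	/* ... plus the remaining callbacks the core scheduler expects */
};

Shared pieces like the nice-to-weight tables or the generic load
tracking could then, in principle, be called from either class's
callbacks, which is roughly the "broken out and shared" split above.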

> OTOH you're right, the thing has specific enough requirements to
> consider a new sched policy. 

Some bits of functionality come to mind (roughly
sketched in code after this list):
- track groups of tasks that should be co-scheduled
  (e.g. all the VCPUs of a virtual machine)
- track the subsets of those groups that are runnable
  (e.g. the currently runnable VCPUs of a virtual machine)
- figure out time slots and CPU assignments to efficiently
  use CPU time for the co-scheduled tasks
  (while leaving some configurable(?) amount of CPU time
  available for other tasks)
- configure some lower-level code on each affected CPU
  to "run task A in slot X", etc

This really does not seem like something that could be
shoehorned into CFS without making it unmaintainable.

It also seems like something you could never really
get into a highly efficient state as long as it is
weighed down by the rest of CFS.

-- 
All Rights Reversed.

