Message-ID: <CABk29Nu0JJ6xY_2SL0Y=iWstmoiRnRRnQ+Xvm3t_oU4sp72vpg@mail.gmail.com>
Date:   Tue, 13 Dec 2022 18:00:19 -0800
From:   Josh Don <joshdon@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Tejun Heo <tj@...nel.org>, torvalds@...ux-foundation.org,
        mingo@...hat.com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
        daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
        brho@...gle.com, pjt@...gle.com, derkling@...gle.com,
        haoluo@...gle.com, dvernet@...a.com, dschatzberg@...a.com,
        dskarlat@...cmu.edu, riel@...riel.com,
        linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
        kernel-team@...a.com, Peter Oskolkov <posk@...gle.com>
Subject: Re: [PATCH 31/31] sched_ext: Add a rust userspace hybrid example scheduler

> > and ignoring
> > the specifics of this example, the UMCG and sched_ext work are
> > complementary, but not mutually exclusive. UMCG is about driving
> > cooperative scheduling within a particular application. UMCG does not
> > have control over or react to external preemption,
>
> It can control preemption inside the process, and if you have the degree
> of control you need to make the whole BPF thing work, you also have the
> degree of control to ensure you only run the one server task on a CPU
> and all that no longer matters because there's only the process and you
> control preemption inside that.

To an extent, yes, but this doesn't extend to the case where the CPU is
overcommitted. Even if not by other applications, the application still
has to respond to preemption by, for example, kthreads (necessary for
microsecond-scale workloads). But in general the common case is
interference from other applications, which is handled by a system-level
scheduler like sched_ext. Application-level vs. system-level control is
an important distinction here.

> > nor does it make thread placement decisions.
>
> It can do that just fine -- inside the process. UMCG has full control
> over which server task a worker task is associated with, then run a
> single server task per CPU and have them pinned and you get full
> placement control.

Again, this doesn't really scale past a single server per CPU. It is not
feasible to partition systems this way due to the loss of efficiency.

> > sched_ext is considering things more at
> > the system level: arbitrating fairness and preemption between
> > processes, deciding when and where threads run, etc., and also being
> > able to take application-specific hints if desired.
>
> sched_ext does fundamentally not compose, you cannot run two different
> schedulers for two different application stacks that happen to co-reside
> on the same machine.

We're actually already developing a framework (which we plan to share)
to support composing an arbitrary combination of schedulers.
Essentially, a "scheduler of schedulers". This supports, for example, a
system that runs most tasks under some default SCX scheduler, but allows
a particular application or group of applications to use a bespoke SCX
scheduler of their own. A rough sketch of what that delegation could
look like is below.
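To make that concrete, here's a minimal sketch against the BPF API in
this series: tasks are routed into one dispatch queue (DSQ) per policy,
and the DSQ a task lands in plus the order the DSQs are drained stand in
for delegating to child schedulers. The PID-keyed map used to classify
tasks (and the two-queue split itself) are illustrative assumptions, not
anything from the posted patches, and the exact section/kfunc names have
varied across revisions:

	#include <scx/common.bpf.h>

	char _license[] SEC("license") = "GPL";

	#define DSQ_DEFAULT	0	/* tasks under the default policy */
	#define DSQ_BESPOKE	1	/* tasks delegated to the app's policy */

	/* Hypothetical map, filled from userspace, marking tasks that
	 * belong to the application with its own bespoke scheduler. */
	struct {
		__uint(type, BPF_MAP_TYPE_HASH);
		__uint(max_entries, 8192);
		__type(key, pid_t);
		__type(value, u32);
	} bespoke_tasks SEC(".maps");

	s32 BPF_STRUCT_OPS_SLEEPABLE(meta_init)
	{
		s32 ret = scx_bpf_create_dsq(DSQ_DEFAULT, -1);

		return ret ?: scx_bpf_create_dsq(DSQ_BESPOKE, -1);
	}

	void BPF_STRUCT_OPS(meta_enqueue, struct task_struct *p, u64 enq_flags)
	{
		pid_t pid = p->pid;
		u64 dsq = bpf_map_lookup_elem(&bespoke_tasks, &pid) ?
			  DSQ_BESPOKE : DSQ_DEFAULT;

		scx_bpf_dispatch(p, dsq, SCX_SLICE_DFL, enq_flags);
	}

	void BPF_STRUCT_OPS(meta_dispatch, s32 cpu, struct task_struct *prev)
	{
		/* Drain the bespoke application's queue first, then fall
		 * back to the default policy's queue. */
		if (!scx_bpf_consume(DSQ_BESPOKE))
			scx_bpf_consume(DSQ_DEFAULT);
	}

	SEC(".struct_ops.link")
	struct sched_ext_ops meta_ops = {
		.enqueue	= (void *)meta_enqueue,
		.dispatch	= (void *)meta_dispatch,
		.init		= (void *)meta_init,
		.name		= "meta_sketch",
	};

The real framework would of course delegate the per-queue ordering
decisions to the child schedulers rather than hardcode a priority
between two queues; this just shows where the composition hooks in.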

> sched_ext also sits at the very bottom of the class stack (it more or
> less has to) the result is that in order to use it at all, you have to
> have control over all runnable tasks in the system (a stray CFS task
> would interfere quite disastrously) but that is exactly the same
> constraint you need to make UMCG work.

UMCG still works when mixed with other tasks. You're specifying which
threads of your application you want running, but no guarantees are
made that they'll run right now if the system has other work to do.

SCX vs. CFS is a more interesting story. Yes, it is true that a single
CFS task could hog a CPU, but since SCX is managing things at the system
level, we feel this is something that should be handled by system
administration. You shouldn't expect good results from mixing CPU-bound
CFS tasks into the same partition as threads running under SCX.
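For the system-administration side, the standard cgroup v2 cpuset
interface is enough to keep stray CFS tasks off the SCX CPUs. A sketch,
where the cgroup path and CPU range are illustrative assumptions (and it
assumes the cpuset controller is already enabled in the parent's
cgroup.subtree_control):

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/stat.h>

	static void write_file(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f || fputs(val, f) == EOF) {
			perror(path);
			exit(1);
		}
		fclose(f);
	}

	int main(void)
	{
		/* Hypothetical partition for the SCX-managed workload. */
		mkdir("/sys/fs/cgroup/scx-part", 0755);
		write_file("/sys/fs/cgroup/scx-part/cpuset.cpus", "8-15");
		write_file("/sys/fs/cgroup/scx-part/cpuset.cpus.partition",
			   "root");
		/* Tasks moved into scx-part/cgroup.procs now exclusively
		 * own CPUs 8-15; CFS tasks elsewhere on the system cannot
		 * be placed there. */
		return 0;
	}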

> Conversely, it is very hard to use the BPF thing to do what UMCG can do.
> Using UMCG I can have a SCHED_DEADLINE server implement a task based
> pipeline schedule (something that's fairly common and really hard to
> pull off with just SCHED_DEADLINE itself).

UMCG and SCX are solving different problems, though. An application can
decide execution order or control internal preemption via UMCG, while
SCX arbitrates the allocation of system resources over time.

And, conversely, SCX can do things that are very difficult or impossible
with UMCG: for example, implementing core scheduling, guaranteeing
microsecond-scale tail latency, or applying a new scheduling algorithm
across multiple independent applications.

> Additionally, UMCG naturally works with things like Proxy Execution,
> seeing how the server task *is* a proxy for the current active worker
> task.

Proxy execution should also work with SCX; the enqueue/dequeue
abstraction can still be used to allow the SCX scheduler to select the
proxy.
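To illustrate what "selecting the proxy" means at the enqueue hook
(keeping in mind that proxy execution is not merged, so this is purely a
sketch of the unchanged SCX side):

	#include <scx/common.bpf.h>

	char _license[] SEC("license") = "GPL";

	void BPF_STRUCT_OPS(proxy_enqueue, struct task_struct *p, u64 enq_flags)
	{
		/* "Run p next on this CPU": dispatch to the local per-CPU
		 * queue. Under a proxy-execution scheme, if p turns out to
		 * be blocked on a mutex, the core kernel would run the lock
		 * owner in p's place; the BPF scheduler's selection of p as
		 * the proxy is what this hook expresses, and that part is
		 * unchanged. */
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, enq_flags);
	}

	SEC(".struct_ops.link")
	struct sched_ext_ops proxy_ops = {
		.enqueue	= (void *)proxy_enqueue,
		.name		= "proxy_sketch",
	};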
