Message-ID: <20240513142646.4dc5484d@rorschach.local.home>
Date: Mon, 13 May 2024 14:26:46 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Tejun Heo <tj@...nel.org>, torvalds@...ux-foundation.org,
mingo@...hat.com, juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
joshdon@...gle.com, brho@...gle.com, pjt@...gle.com, derkling@...gle.com,
haoluo@...gle.com, dvernet@...a.com, dschatzberg@...a.com,
dskarlat@...cmu.edu, riel@...riel.com, changwoo@...lia.com,
himadrics@...ia.fr, memxor@...il.com, andrea.righi@...onical.com,
joel@...lfernandes.org, linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [PATCHSET v6] sched: Implement BPF extensible scheduler class
On Mon, 13 May 2024 10:03:59 +0200
Peter Zijlstra <peterz@...radead.org> wrote:
> > I believe we agree that we want more people contributing to the scheduling
> > area.
>
> I think therein lies the rub -- contribution. If we were to do this
> thing, random loadable BPF schedulers, then how do we ensure people will
> contribute back?
Hi Peter,
I'm somewhat agnostic to sched_ext itself, but I have been an advocate
for a pluggable scheduler infrastructure. And we are seriously looking
at adding it to ChromeOS.
>
> That is, from where I am sitting I see $vendor mandate their $enterprise
> product needs their $BPF scheduler. At which point $vendor will have no
> incentive to ever contribute back.
Believe me, they already have their own scheduler, and because it's so
different, it's very hard to contribute back.
>
> And customers of $vendor that want to run additional workloads on
> their machine are then stuck with that scheduler, irrespective of it
> being suitable for them or not. This is not a good experience.
And $vendor usually has such a unique workload that their changes would
likely cause regressions in other workloads, making it even harder to
contribute back.
>
> So I don't at all mind people playing around with schedulers -- they can
> do so today, there are a ton of out of tree patches to start or learn
> from, or like I said, it really isn't all that hard to just rip out fair
> and write something new.
For cloud servers, I bet a lot of schedulers are not public, although
my company does try to publish the schedulers it uses.
>
> Open source, you get to do your own thing. Have at.
>
> But part of what made Linux work so well, is in my opinion the GPL. GPL
> forces people to contribute back -- to work on the shared project. And I
> see the whole BPF thing as a run-around on that.
>
> Even the large cloud vendors and service providers (Amazon, Google,
> Facebook etc.) contribute back because of rebase pain -- as you well
> know. The rebase pain offsets the 'TIVO hole'.
From what I understand (I work on Chromebooks, not production servers),
a lot of changes cannot be contributed back because those kernels have
diverged too far from what is upstream. Having a pluggable scheduler
would actually allow them to contribute *more*.
>
> But with the BPF muck; where is the motivation to help improve things?
For the same reasons you mention about GPL and why it works.
Collaboration. Sharing ideas helps everyone. If there's some secret
sauce scheduler, then they would likely just replace the scheduler, as
it's more performant. I don't believe it would be worthwhile to use BPF
for that purpose.
>
> Keeping a rando github repo with BPF schedulers is not contributing.
Agreed, and I would guess having them in the Linux kernel tree would be
more beneficial.
> That's just a repo with multiple out of tree schedulers to be ignored.
> Who will put in the effort of upsteaming things if they can hack up a
> BPF and throw it over the wall?
If there's a place in the Linux kernel tree, I'm sure there would be
motivation to put these schedulers there. Having them in the kernel
proper gives the code more visibility, and therefore more enhancements
to that code. This
was the same rationale for putting perf into the kernel proper.
>
> So yeah, I'm very much NOT supportive of this effort. From where I'm
> sitting there is simply not a single benefit. You're not making my life
> better, so why would I care?
>
> How does this BPF muck translate into better quality patches for me?
Here's how we will be using it (we will likely be porting sched_ext to
ChromeOS regardless of its acceptance).
Testing scheduler changes in the field is extremely time consuming and
complex. We tested EEVDF vs CFS by backporting EEVDF to 5.15 (as that
is the kernel version we are using on the Chromebooks we were testing
on), and then we needed to add a user-space "switch" to change the
scheduler. Note that adding these changes itself risks introducing
bugs. Then we push the kernel out and start our experiment, which
enables the feature for a small percentage of users and slowly
increases that percentage until we have enough users for a
statistically significant result.
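To make that concrete, the user-space "switch" amounts to a knob that
the backported code consults at its decision points. A hypothetical
sketch (not the actual ChromeOS patch; the names here are made up)
might look like:

	#include <linux/init.h>
	#include <linux/sysctl.h>

	/* Checked by the backported pick/placement paths elsewhere. */
	unsigned int sysctl_sched_use_eevdf;	/* 0 = CFS, 1 = EEVDF */

	static struct ctl_table sched_experiment_table[] = {
		{
			.procname	= "sched_use_eevdf",
			.data		= &sysctl_sched_use_eevdf,
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_douintvec_minmax,
			.extra1		= SYSCTL_ZERO,
			.extra2		= SYSCTL_ONE,
		},
		{ }
	};

	static int __init sched_experiment_init(void)
	{
		/* Shows up as /proc/sys/kernel/sched_use_eevdf */
		register_sysctl("kernel", sched_experiment_table);
		return 0;
	}
	late_initcall(sched_experiment_init);

All the extra plumbing to branch on that knob throughout the backport
is exactly the kind of churn that risks introducing bugs.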
What sched_ext would give us is an easy way to try different scheduling
algorithms and get feedback much quicker. Once we determine a solution
that improves things, we would then spend the time to implement it in
the scheduler, and yes, send it upstream.
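For a feel of what "trying a scheduling algorithm" looks like on that
path, here is a rough sketch of a minimal global-FIFO BPF scheduler,
along the lines of the scx_simple example in the patchset (treat it as
illustrative only; the exact helpers and macros have shifted between
patchset versions):

	#include <scx/common.bpf.h>

	char _license[] SEC("license") = "GPL";

	/* Try the default idle-CPU selection; dispatch directly if idle. */
	s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p,
			   s32 prev_cpu, u64 wake_flags)
	{
		bool is_idle = false;
		s32 cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags,
						 &is_idle);

		if (is_idle)
			scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

		return cpu;
	}

	/* Otherwise everything goes onto the global dispatch queue (FIFO). */
	void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p,
			    u64 enq_flags)
	{
		scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
	}

	SEC(".struct_ops.link")
	struct sched_ext_ops simple_ops = {
		.select_cpu	= (void *)simple_select_cpu,
		.enqueue	= (void *)simple_enqueue,
		.name		= "simple",
	};

Iterating on something like that through our experiment pipeline is far
cheaper than respinning and requalifying a whole kernel for every
scheduling tweak.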
To me, sched_ext should never be the final solution, but it can be
extremely useful for quickly testing various changes in the field,
which would encourage more contributions.
-- Steve