Message-ID: <20240513080359.GI30852@noisy.programming.kicks-ass.net>
Date: Mon, 13 May 2024 10:03:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: torvalds@...ux-foundation.org, mingo@...hat.com, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	bristot@...hat.com, vschneid@...hat.com, ast@...nel.org,
	daniel@...earbox.net, andrii@...nel.org, martin.lau@...nel.org,
	joshdon@...gle.com, brho@...gle.com, pjt@...gle.com,
	derkling@...gle.com, haoluo@...gle.com, dvernet@...a.com,
	dschatzberg@...a.com, dskarlat@...cmu.edu, riel@...riel.com,
	changwoo@...lia.com, himadrics@...ia.fr, memxor@...il.com,
	andrea.righi@...onical.com, joel@...lfernandes.org,
	linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
	kernel-team@...a.com
Subject: Re: [PATCHSET v6] sched: Implement BPF extensible scheduler class

On Sun, May 05, 2024 at 01:31:26PM -1000, Tejun Heo wrote:

> > You Google/Facebook are touting collaboration, collaborate on fixing it.
> > Instead of re-posting this over and over. After all, your main
> > motivation for starting this was the cpu-cgroup overhead.
> 
> The hierarchical scheduling overhead isn't the main motivation for us. We
> can't use the CPU controller for all workloads and while it'd be nice to
> improve that,

Hurmph, I had the impression from the earlier threads that this ~5%
cgroup overhead was most definitely a problem and a motivator for all
this.

The overhead was prohibitive, it was claimed, and you needed a solution.
Didn't previous versions use this very argument to push for all this?

By improving the cgroup mess -- and I very much agree that the cgroup
thing is not very nice -- this whole argument goes away and we all get a
better cgroup implementation.

> This view works only if you assume that the entire world contains only a
> handful of developers who can work on schedulers. The only way that would be
> the case is if the barrier of entry is raised unreasonably high. Sometimes a
> high barrier of entry can't be avoided or is beneficial. However, if it's
> pushed up high enough to leave only a handful of people to work on an area
> as large as scheduling, something probably is wrong.

I've never really felt there were too few sched patches to stare at on
any one day (quite the opposite on many days in fact).

There have also always been plenty out of tree scheduler patches --
although I rarely if ever have time to look at them.

Writing a custom scheduler isn't that hard; simply ripping out
fair_sched_class and replacing it with something simple really isn't
*that* hard.

The only really hard requirement is respecting affinities; you'll crash
and burn real hard if you get that wrong (think of all the per-cpu
kthreads that hard-rely on their per-cpu-ness).

But you can easily ignore cgroups, uclamp and a ton of other stuff and
still boot and play around.
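(For the curious: the rough shape of such a replacement is a sched_class
with its hooks filled in. This is a hypothetical sketch, not a patch --
exact hook names and signatures vary across kernel versions, and the
toy_* helpers are placeholders -- but it shows where the affinity
requirement bites:

```c
/*
 * Sketch of a minimal scheduling class.  The one hard rule: never
 * place a task on a CPU outside p->cpus_ptr -- per-cpu kthreads
 * depend on it.
 */
static int select_task_rq_toy(struct task_struct *p, int prev_cpu, int flags)
{
	/* Stay put if the previous CPU is still allowed... */
	if (cpumask_test_cpu(prev_cpu, p->cpus_ptr))
		return prev_cpu;
	/* ...otherwise pick any CPU the task is affined to. */
	return cpumask_any(p->cpus_ptr);
}

const struct sched_class toy_sched_class = {
	.enqueue_task	= enqueue_task_toy,	/* put task on a runqueue */
	.dequeue_task	= dequeue_task_toy,	/* take it back off */
	.pick_next_task	= pick_next_task_toy,	/* e.g. a naive global FIFO */
	.put_prev_task	= put_prev_task_toy,
	.select_task_rq	= select_task_rq_toy,	/* MUST honour p->cpus_ptr */
	.task_tick	= task_tick_toy,
};
```

Ignore cgroups, uclamp, load balancing and the rest, and something of
this shape will boot.)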

> I believe we agree that we want more people contributing to the scheduling
> area. 

I think therein lies the rub -- contribution. If we were to do this
thing, random loadable BPF schedulers, then how do we ensure people will
contribute back?

That is, from where I am sitting I see $vendor mandate their $enterprise
product needs their $BPF scheduler. At which point $vendor will have no
incentive to ever contribute back.

And customers of $vendor that want to run additional workloads on
their machine are then stuck with that scheduler, irrespective of it
being suitable for them or not. This is not a good experience.

So I don't at all mind people playing around with schedulers -- they can
do so today, there are a ton of out of tree patches to start or learn
from, or like I said, it really isn't all that hard to just rip out fair
and write something new.

Open source, you get to do your own thing. Have at.

But part of what made Linux work so well, is in my opinion the GPL. GPL
forces people to contribute back -- to work on the shared project. And I
see the whole BPF thing as a run-around on that.

Even the large cloud vendors and service providers (Amazon, Google,
Facebook etc.) contribute back because of rebase pain -- as you well
know. The rebase pain offsets the 'TiVo hole'.

But with the BPF muck; where is the motivation to help improve things?

Keeping a rando github repo with BPF schedulers is not contributing.
That's just a repo with multiple out of tree schedulers to be ignored.
Who will put in the effort of upstreaming things if they can hack up a
BPF scheduler and throw it over the wall?

So yeah, I'm very much NOT supportive of this effort. From where I'm
sitting there is simply not a single benefit. You're not making my life
better, so why would I care?

How does this BPF muck translate into better quality patches for me?
