Message-ID: <20250212230002.95945-1-cpru@amazon.com>
Date: Wed, 12 Feb 2025 17:00:02 -0600
From: Cristian Prundeanu <cpru@...zon.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Cristian Prundeanu <cpru@...zon.com>, K Prateek Nayak
<kprateek.nayak@....com>, Hazem Mohamed Abuelfotoh <abuehaze@...zon.com>,
"Ali Saidi" <alisaidi@...zon.com>, Benjamin Herrenschmidt
<benh@...nel.crashing.org>, Geoff Blake <blakgeof@...zon.com>, Csaba Csoma
<csabac@...zon.com>, Bjoern Doebel <doebel@...zon.com>, Gautham Shenoy
<gautham.shenoy@....com>, Joseph Salisbury <joseph.salisbury@...cle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>, Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>, Borislav Petkov
<bp@...en8.de>, <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <linux-tip-commits@...r.kernel.org>,
<x86@...nel.org>
Subject: Re: [PATCH v2] [tip: sched/core] sched: Move PLACE_LAG and RUN_TO_PARITY to sysctl
>>> Moving PLACE_LAG and RUN_TO_PARITY to sysctl will allow users to override
>>> their default values and persist them with established mechanisms.
>>
>> Nope -- you have knobs in debugfs, and that's where they'll stay. Esp.
>> PLACE_LAG is super dodgy and should not get elevated to anything
>> remotely official.
>
> Just to clarify, the problem with NO_PLACE_LAG is that by discarding
> lag, a task can game the system to 'gain' time. It fundamentally breaks
> fairness, and the only reason I implemented it at all was because it is
> one of the 'official' placement strategies in the original paper.
Wouldn't this be an argument in favor of giving this knob a more official
position? It may be dodgy, but it is currently the best mitigation option
until something better comes along.
> If the tasks are unconstrained / aperiodic, this goes out the window and
> the placement strategy becomes unsound. And given we must assume
> userspace to be malicious / hostile / unbehaved, the whole thing is just
> not good.
Userspace in general, absolutely. User intent should be king, though, and
impairing the ability to do precisely what you want with your machine
feels like it stands against what Linux is best known (and often feared)
for: configurability. There is _another_ OS which has made a habit of
dictating how users should do things. We're not there, of course, but it's
a strong cautionary tale.
To put it more specifically: isn't a strong point of EEVDF the fact that
it considers _more_ user needs and use cases than CFS (for instance, task
lag/latency)?
>> Conversely, setting NO_PLACE_LAG + NO_RUN_TO_PARITY is simply done at boot
>> time, and does not require further user effort.
>
> For your workload. It will wreck other workloads.
I'd like to invite you to name one real-life workload that would be
wrecked by allowing the PLACE_LAG and RUN_TO_PARITY override in sysctl. I
can name three that are currently impacted (mysql, postgres, and
wordpress), with only poor means to mitigate the regression: increased
effort, non-standard persistence leading to higher maintenance cost, and
the requirement for debugfs (see the sketch below).
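To make the contrast in user effort concrete, here is roughly what the two
routes look like (the sysctl knob names below are illustrative only, not
necessarily what the patch finally exposes):

  # today: requires root, a mounted debugfs, and a custom boot-time
  # script, because writes here do not persist across reboots
  echo NO_PLACE_LAG     > /sys/kernel/debug/sched/features
  echo NO_RUN_TO_PARITY > /sys/kernel/debug/sched/features

  # with the patch: standard, persistent configuration via e.g.
  # /etc/sysctl.d/99-sched.conf (hypothetical knob names):
  kernel.sched_place_lag_enabled = 0
  kernel.sched_run_to_parity_enabled = 0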
> Yes, SCHED_BATCH might be more fiddly, but it allows for composition.
> You can run multiple workloads together and they all behave.
Shouldn't we leave that to the user to decide, though? Forcing a new
default configuration that only works well when multiple workloads run
together cannot be the right thing for everyone - especially for
large-scale providers, where servers and the corresponding images are
intended to run one main workload. Importantly, these are workloads that
used to run well and now don't.
> Maybe the right thing here is to get mysql patched; so that it will
> request BATCH itself for the threads that need it.
For mysql in particular, that is a possible avenue (though I still object
to the idea that individual users and vendors now need to put in
additional effort to maintain the same performance as before).
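For reference, the per-thread opt-in would look roughly like this - a
minimal sketch assuming a plain C worker thread, not mysql's actual code:

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  /* Ask the scheduler to treat the calling thread as batch work.
   * On Linux, pid 0 means the calling thread, and SCHED_BATCH takes
   * static priority 0. */
  static int request_batch(void)
  {
          struct sched_param sp = { .sched_priority = 0 };

          if (sched_setscheduler(0, SCHED_BATCH, &sp)) {
                  perror("sched_setscheduler(SCHED_BATCH)");
                  return -1;
          }
          return 0;
  }

Each worker thread that needs it would call this once at startup.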
But in the bigger picture, this reproducer is only meant as a simplified
illustration of the performance issues; it is not an isolated occurrence.
There are far more complex workloads where tuning at the thread level is
at best impractical, or even downright impossible. Think of managed
clusters where the load distribution and corresponding task density are
not user controlled, or JVM workloads where individual threads are not
even designed to be managed externally, or containers built from external
dependencies where tuning a service is anything but trivial.
Are we really saying that everyone just needs to swallow the cost of this
change, or put up with the lower performance? Even if the Linux kernel
doesn't concern itself with business cost, surely at least the time burned
on this by both commercial and non-commercial projects cannot be lost on
you.
> Also, FYI, by keeping these emails threaded in the old thread I nearly
> missed them again. I'm not sure where this nonsense of keeping
> everything in one thread came from, but it is bloody stupid.
Thank you. This is a great opportunity for both of us to relate to the
opposing stance on this patch, and I hope you too will see the parallel:
My reason for threading was well-intentioned. I value your time and wanted
to save you from wasting it searching for the previous patch or older
threads on the same topic.
However, I ended up inadvertently creating an issue for your use case.
Arguably, it doesn't have a noticeable impact on my side, and you, the
user, could avoid it by configuring your email client to always highlight
messages addressed directly to you - assuming that your client supports
it, and that you are able and willing to invest the effort to do so.
Nevertheless, this doesn't make it right.
I do apologize for the annoyance; my intent was not to put an additional
burden on you, only to preserve the same experience and efficiency you are
used to. I did consolidate the two recent threads into this one, though,
because I believe it is easier for everyone else to follow.
It may be a silly parallel, but please consider that similar frustration
is happening to many users who are now asked to put effort towards
bringing performance back to previous levels - if that is at all possible
and feasible - and who are at the same time denied the right tools to do
so.
Please consider that it took years for EEVDF commit messages to go from
"horribly messes up things" to "isn't perfect yet, but much closer", and
it may still take years until it is as stable, performant, and vetted
across varied scenarios as CFS was in kernel 6.5.
Please consider that along this journey are countless users and groups who
would rather not wait for perfection, but instead have easy means to at
least get back the performance they had before.
-Cristian