Message-ID: <20250205090736.GY7145@noisy.programming.kicks-ass.net>
Date: Wed, 5 Feb 2025 10:07:36 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
	Thomas Gleixner <tglx@...utronix.de>,
	Ankur Arora <ankur.a.arora@...cle.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>, linux-mm@...ck.org,
	x86@...nel.org, akpm@...ux-foundation.org, luto@...nel.org,
	bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
	juri.lelli@...hat.com, vincent.guittot@...aro.org,
	willy@...radead.org, mgorman@...e.de, jon.grimm@....com,
	bharata@....com, raghavendra.kt@....com, boris.ostrovsky@...cle.com,
	konrad.wilk@...cle.com, jgross@...e.com, andrew.cooper3@...rix.com,
	Joel Fernandes <joel@...lfernandes.org>,
	Vineeth Pillai <vineethrp@...gle.com>,
	Suleiman Souhlal <suleiman@...gle.com>,
	Ingo Molnar <mingo@...nel.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Clark Williams <clark.williams@...il.com>, bigeasy@...utronix.de,
	daniel.wagner@...e.com, joseph.salisbury@...cle.com,
	broonie@...il.com
Subject: Re: [RFC][PATCH 1/2] sched: Extended scheduler time slice

On Tue, Feb 04, 2025 at 11:11:19AM -0500, Steven Rostedt wrote:
> On Tue, 4 Feb 2025 16:30:53 +0100
> Peter Zijlstra <peterz@...radead.org> wrote:
> 
> > If you go back and reread that initial thread, you'll find the 50us is
> > below the scheduling latency that random test box already had.
> > 
> > I'm sure more modern systems will have a lower number, and slower
> > systems will have a larger number, but we got to pick a number :/
> > 
> > I'm fine with making it 20us. Or whatever. It's just a stupid number.
> > 
> > But yes. If we're going to be doing this, there is absolutely no reason
> > not to allow DEADLINE/FIFO threads the same. Misbehaving FIFO is already
> > a problem, and we can make DL-CBS enforcement punch through it if we
> > have to.
> > 
> > And fewer retries on the RSEQ for FIFO can equally improve performance.
> > 
> > There is no difference between a 'malicious/broken' userspace consuming
> > the entire window in userspace (50us, 20us whatever it will be) and
> > doing a system call which we know will cause similar delays because it
> > does in-kernel locking.
> 
> This is where we will disagree for the reasons I explained in my second
> email. This feature affects other tasks. And no, making it 20us doesn't
> make it better. Because from what I get from you, if we implement this, it
> will be available for all preemption methods (including PREEMPT_RT), where
> we do have less than 50us latency, and even 20us will break those
> applications.

Then pick another number; RT too has a max scheduling latency number (on
some random hardware). If you stay below that, all is fine. 

> This was supposed to be only a hint to the kernel, not a complete feature

That's a contradiction in terms -- even a hint is a feature.

> that is hard coded and will override how other tasks behave. 

Everything has some effect. My point is that if you limit this effect to
less than what can already happen today, you're not making things worse.

> As system
> calls themselves can make how things are scheduled depending on the
> preemption method,

What?

> I didn't want to add something that will change how
> things are scheduled that ignores the preemption method that was chosen.

Userspace is totally oblivious to the preemption method chosen, and it
damn well should be.



