Message-ID: <ZQluwllEnTxvyIgU@gmail.com>
Date: Tue, 19 Sep 2023 11:49:54 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Andy Lutomirski <luto@...nel.org>,
Ankur Arora <ankur.a.arora@...cle.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, the arch/x86 maintainers <x86@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
"Matthew Wilcox (Oracle)" <willy@...radead.org>, mgorman@...e.de,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Jon Grimm <jon.grimm@....com>, Bharata B Rao <bharata@....com>,
raghavendra.kt@....com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2 7/9] sched: define TIF_ALLOW_RESCHED

* Thomas Gleixner <tglx@...utronix.de> wrote:

> On Mon, Sep 18 2023 at 20:21, Andy Lutomirski wrote:
> > On Wed, Aug 30, 2023, at 11:49 AM, Ankur Arora wrote:
>
> > Why do we support anything other than full preempt? I can think of
> > two reasons, neither of which I think is very good:
> >
> > 1. Once upon a time, tracking preempt state was expensive. But we fixed that.
> >
> > 2. Folklore suggests that there's a latency vs throughput tradeoff,
> > and serious workloads, for some definition of serious, want
> > throughput, so they should run without full preemption.
>
> It's absolutely not folklore. Run to completion has well-known benefits,
> as it avoids contention and the overhead of scheduling in a large number
> of scenarios.
>
> We've seen that painfully in PREEMPT_RT before we came up with the
> concept of lazy preemption for throughput-oriented tasks.

Yeah, for a large majority of workloads a reduction in preemption increases
batching and improves cache locality. Most scalability-conscious enterprise
users want longer timeslices & better cache locality, not shorter
timeslices with spread-out cache use.
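
Just to put rough numbers on the batching argument, here's a trivial
user-space back-of-envelope model - nothing below is kernel code or
measured data, both the per-item and per-switch costs are invented purely
for illustration:

  /*
   * Toy cost model: hand every item off to a freshly woken task
   * immediately, vs. letting the current task batch items and hand
   * off once per batch.  All costs are made-up illustrative numbers,
   * and the cold-cache penalty the wakee would pay in reality is not
   * even modelled - which flatters the eager case.
   */
  #include <stdio.h>

  int main(void)
  {
          const long long items          = 1000000;
          const long long batch          = 64;
          const long long item_cost_ns   = 500;   /* work per item           */
          const long long switch_cost_ns = 2000;  /* wakeup + context switch */

          long long eager   = items * (item_cost_ns + switch_cost_ns);
          long long batched = items * item_cost_ns +
                              (items / batch) * switch_cost_ns;

          printf("eager handoff:   %lld ms\n", eager   / 1000000);
          printf("batched handoff: %lld ms\n", batched / 1000000);
          return 0;
  }

Even with the eager side getting a free pass on cache refills, the batched
variant comes out roughly 5x ahead in this model.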

There are microbenchmarks that fit mostly in cache and benefit when work is
immediately processed by freshly woken tasks - but that's not true for most
workloads with a substantial real-life cache footprint.
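
And to make the "lazy preemption" approach Thomas mentions above a bit more
concrete: the idea is roughly the following. This is a user-space toy
sketch, not the actual kernel code - the struct, flag and function names
are all invented for illustration:

  /*
   * Toy model of lazy rescheduling: an ordinary ("lazy") resched request
   * is only honored at a natural boundary (tick expiry, return to user
   * space), while an urgent request is honored at the next preemption
   * point.  Names are made up; this is not kernel code.
   */
  #include <stdbool.h>
  #include <stdio.h>

  struct toy_task {
          bool resched_now;       /* e.g. an RT task was woken    */
          bool resched_lazy;      /* e.g. a normal task was woken */
  };

  /* Checked at every preemption point (think preempt_enable()). */
  static bool preempt_at_preemption_point(const struct toy_task *t)
  {
          return t->resched_now;  /* lazy requests are ignored here */
  }

  /* Checked at the tick and on return to user space. */
  static bool preempt_at_boundary(const struct toy_task *t)
  {
          return t->resched_now || t->resched_lazy;
  }

  int main(void)
  {
          struct toy_task t = { .resched_now = false, .resched_lazy = true };

          /* The current kernel work runs to completion ... */
          printf("preempt at preemption point: %d\n",
                 preempt_at_preemption_point(&t));      /* prints 0 */

          /* ... but the wakee still gets the CPU at the next boundary. */
          printf("preempt at boundary:         %d\n",
                 preempt_at_boundary(&t));              /* prints 1 */
          return 0;
  }

The throughput win comes from the first case - kernel work isn't chopped up
at every preemption point - while latency-critical wakeups can still set
the urgent flag and preempt immediately.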

Thanks,

	Ingo