Message-ID: <CAMj1kXGKRc8NNQWDpgLL_=G2DWYv6wXcgkpFw=H98LxTHjpq+w@mail.gmail.com>
Date: Thu, 10 Feb 2022 10:29:21 +0100
From: Ard Biesheuvel <ardb@...nel.org>
To: Mark Rutland <mark.rutland@....com>
Cc: Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Frederic Weisbecker <frederic@...nel.org>,
James Morse <james.morse@....com>, joey.gouly@....com,
Juri Lelli <juri.lelli@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Valentin Schneider <valentin.schneider@....com>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH v3 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC
with static keys
On Wed, 9 Feb 2022 at 16:35, Mark Rutland <mark.rutland@....com> wrote:
>
> This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
> mechanism allowing the preemption functions to be enabled/disabled using
> static keys rather than static calls, with architectures selecting
> whether they use static calls or static keys.
>
> With non-inline static calls, each function call results in a call to
> the (out-of-line) trampoline which either tail-calls its associated
> callee or performs an early return.
>
> The key idea is that where we're only enabling/disabling a single
> callee, we can inline this trampoline into the start of the callee,
> using a static key to decide whether to return early, and leaving the
> remaining codegen to the compiler. The overhead should be similar to
> (and likely lower than) using a static call trampoline. Since most
> codegen is up to the compiler, we sidestep a number of implementation
> pain-points (e.g. things like CFI should "just work" as well as they do
> for any other functions).
>
> The bulk of the diffstat for kernel/sched/core.c is shuffling the
> PREEMPT_DYNAMIC code later in the file, and the actual additions are
> fairly trivial.
>
> I've given this very light build+boot testing so far.
>
> Since v1 [1]:
> * Rework Kconfig text to be clearer
> * Rework arm64 entry code
> * Clarify commit messages.
>
> Since v2 [2]:
> * Add missing includes
> * Always provide prototype for preempt_schedule()
> * Always provide prototype for preempt_schedule_notrace()
> * Fix __cond_resched() to default to disabled
> * Fix might_resched() to default to disabled
> * Clarify example in commit message
>
> [1] https://lore.kernel.org/r/20211109172408.49641-1-mark.rutland@arm.com/
> [2] https://lore.kernel.org/r/20220204150557.434610-1-mark.rutland@arm.com/
>
> Mark Rutland (7):
> sched/preempt: move PREEMPT_DYNAMIC logic later
> sched/preempt: refactor sched_dynamic_update()
> sched/preempt: simplify irqentry_exit_cond_resched() callers
> sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY
> sched/preempt: add PREEMPT_DYNAMIC using static keys
> arm64: entry: centralize preemption decision
> arm64: support PREEMPT_DYNAMIC
>
Acked-by: Ard Biesheuvel <ardb@...nel.org>
> arch/Kconfig | 37 +++-
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/preempt.h | 19 +-
> arch/arm64/kernel/entry-common.c | 28 ++-
> arch/x86/Kconfig | 2 +-
> arch/x86/include/asm/preempt.h | 10 +-
> include/linux/entry-common.h | 15 +-
> include/linux/kernel.h | 7 +-
> include/linux/sched.h | 10 +-
> kernel/entry/common.c | 23 +-
> kernel/sched/core.c | 347 ++++++++++++++++++-------------
> 11 files changed, 327 insertions(+), 172 deletions(-)
>
> --
> 2.30.2
>