Message-Id: <20220204150557.434610-1-mark.rutland@arm.com>
Date:   Fri,  4 Feb 2022 15:05:50 +0000
From:   Mark Rutland <mark.rutland@....com>
To:     linux-arm-kernel@...ts.infradead.org
Cc:     ardb@...nel.org, bp@...en8.de, catalin.marinas@....com,
        dave.hansen@...ux.intel.com, frederic@...nel.org,
        james.morse@....com, joey.gouly@....com, juri.lelli@...hat.com,
        linux-kernel@...r.kernel.org, luto@...nel.org,
        mark.rutland@....com, mingo@...hat.com, peterz@...radead.org,
        tglx@...utronix.de, valentin.schneider@....com, will@...nel.org
Subject: [PATCH v2 0/7] arm64 / sched/preempt: support PREEMPT_DYNAMIC with static keys

This series enables PREEMPT_DYNAMIC on arm64. To do so, it adds a new
mechanism that allows the preemption functions to be enabled/disabled
using static keys rather than static calls, with each architecture
selecting whether it uses static calls or static keys.
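
For the curious, the arch-facing wiring amounts to routing the
preemption helper through an entry point that the core code can
patch. A minimal sketch (the dynamic_preempt_schedule() name is
illustrative; see the later patches for the real wiring):

#ifdef CONFIG_PREEMPT_DYNAMIC
/* Core-provided entry point whose behaviour is patched at runtime. */
void dynamic_preempt_schedule(void);
#define __preempt_schedule() dynamic_preempt_schedule()
#else
#define __preempt_schedule() preempt_schedule()
#endif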

With non-inline static calls, each function call results in a call to
the (out-of-line) trampoline which either tail-calls its associated
callee or performs an early return.
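
For comparison, the static call flavour looks roughly like the below,
using the generic static call API from <linux/static_call.h> (the
my_preempt / preempt_*_fn names are made up for illustration):

#include <linux/static_call.h>

static void preempt_enabled_fn(void)  { /* the real work */ }
static void preempt_disabled_fn(void) { /* i.e. an early return */ }

/* Out-of-line trampoline; every call site branches here first. */
DEFINE_STATIC_CALL(my_preempt, preempt_enabled_fn);

/* Call sites go via the trampoline, which tail-calls the target. */
static void caller(void)
{
        static_call(my_preempt)();
}

/* Re-targeting the trampoline switches all call sites at once. */
static void my_preempt_disable(void)
{
        static_call_update(my_preempt, &preempt_disabled_fn);
}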

The key idea is that where we're only enabling/disabling a single
callee, we can inline this trampoline into the start of the callee,
using a static key to decide whether to return early, and leaving the
remaining codegen to the compiler. The overhead should be similar to
(and likely lower than) using a static call trampoline. Since most
codegen is up to the compiler, we sidestep a number of implementation
pain-points (e.g. things like CFI should "just work" as well as they
do for any other function).
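
Concretely, each callee ends up shaped like the below (a minimal
sketch using the static branch API from <linux/jump_label.h>;
sk_my_preempt and my_preempt_schedule are illustrative names):

#include <linux/jump_label.h>

/* One key per callee; defaults to enabled (i.e. full preemption). */
static DEFINE_STATIC_KEY_TRUE(sk_my_preempt);

void my_preempt_schedule(void)
{
        /*
         * The inlined "trampoline": a single patched branch decides
         * whether to return early; the compiler owns the rest of
         * the codegen.
         */
        if (!static_branch_unlikely(&sk_my_preempt))
                return;

        /* ... the real preemption logic follows ... */
}

/* Enabling/disabling rewrites the branch in place. */
static void my_preempt_set(bool enabled)
{
        if (enabled)
                static_branch_enable(&sk_my_preempt);
        else
                static_branch_disable(&sk_my_preempt);
}

Since the branch lives in the callee itself rather than in a separate
trampoline, there is no out-of-line code for tooling to special-case.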

The bulk of the diffstat for kernel/sched/core.c comes from shuffling
the PREEMPT_DYNAMIC code later in the file; the actual additions are
fairly trivial.

I've given this very light build+boot testing so far.

Since v1 [1]:
* Rework Kconfig text to be clearer
* Rework arm64 entry code
* Clarify commit messages

[1] https://lore.kernel.org/all/20211109172408.49641-1-mark.rutland@arm.com/

Mark Rutland (7):
  sched/preempt: move PREEMPT_DYNAMIC logic later
  sched/preempt: refactor sched_dynamic_update()
  sched/preempt: simplify irqentry_exit_cond_resched() callers
  sched/preempt: decouple HAVE_PREEMPT_DYNAMIC from GENERIC_ENTRY
  sched/preempt: add PREEMPT_DYNAMIC using static keys
  arm64: entry: centralize preemption decision
  arm64: support PREEMPT_DYNAMIC

 arch/Kconfig                     |  37 +++-
 arch/arm64/Kconfig               |   1 +
 arch/arm64/include/asm/preempt.h |  16 +-
 arch/arm64/kernel/entry-common.c |  28 ++-
 arch/x86/Kconfig                 |   2 +-
 arch/x86/include/asm/preempt.h   |  10 +-
 include/linux/entry-common.h     |  15 +-
 include/linux/kernel.h           |   7 +-
 include/linux/sched.h            |  10 +-
 kernel/entry/common.c            |  22 +-
 kernel/sched/core.c              | 346 ++++++++++++++++++-------------
 11 files changed, 322 insertions(+), 172 deletions(-)

-- 
2.30.2
