Date: Tue, 12 Mar 2024 12:39:09 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Jonathan Corbet <corbet@....net>, Ingo Molnar <mingo@...hat.com>, 
	Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>, 
	Vincent Guittot <vincent.guittot@...aro.org>, Will Deacon <will@...nel.org>
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Sean Christopherson <seanjc@...gle.com>, Valentin Schneider <valentin.schneider@....com>, 
	Marco Elver <elver@...gle.com>, Frederic Weisbecker <frederic@...nel.org>, 
	David Matlack <dmatlack@...gle.com>, Friedrich Weber <f.weber@...xmox.com>, 
	Ankur Arora <ankur.a.arora@...cle.com>, Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH v2 0/2] sched/core: Fix spinlocks vs. PREEMPT_DYNAMIC=y

Fix a bug in dynamic preemption where the kernel will yield contended
spinlocks (and rwlocks) even if the selected preemption model is "none" or
"voluntary".  I say "bug" because this divergence from PREEMPT_DYNAMIC=n
behavior effectively broke existing KVM configurations, e.g. vCPUs would
get stuck and become unresponsive for multiple seconds if there was heavy
KSM or NUMA balancing activity in the host.
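
For the curious, the gist of patch 2 is to make spin_needbreak() honor the
runtime-selected preemption model.  A minimal sketch of the
include/linux/spinlock.h change (illustrative only, not the literal diff):

	/*
	 * Does a critical section need to be broken because another task
	 * is waiting?  With PREEMPT_DYNAMIC=y, honor the preemption model
	 * selected at boot instead of unconditionally yielding.
	 */
	static inline int spin_needbreak(spinlock_t *lock)
	{
		/* Don't yield if the selected model is "none" or "voluntary". */
		if (!preempt_model_preemptible())
			return 0;

		return spin_is_contended(lock);
	}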

This isn't super urgent, as 6.8 has a fix in KVM for the over-aggressive
yielding (commit d02c357e5bfa ("KVM: x86/mmu: Retry fault before acquiring
mmu_lock if mapping is changing")), but I wouldn't be surprised if the
behavior is causing other performance issues/regressions that are less
severe and/or less visible.

v2:
 - Rebase onto Linus' tree to deal with the code movement to spinlock.h.
 - Opportunistically document the behavior.
 - Add the PREEMPT_AUTO folks to Cc to get their eyeballs/input.

v1: https://lore.kernel.org/all/20240110214723.695930-1-seanjc@google.com

Sean Christopherson (2):
  sched/core: Move preempt_model_*() helpers from sched.h to preempt.h
  sched/core: Drop spinlocks on contention iff kernel is preemptible

 .../admin-guide/kernel-parameters.txt         |  4 +-
 include/linux/preempt.h                       | 41 +++++++++++++++++++
 include/linux/sched.h                         | 41 -------------------
 include/linux/spinlock.h                      | 14 +++----
 4 files changed, 50 insertions(+), 50 deletions(-)
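
As context for patch 1 (moving the preempt_model_*() helpers from sched.h
to preempt.h, presumably so spinlock.h can use them without dragging in all
of sched.h), the helpers look roughly like the following simplified sketch:
with PREEMPT_DYNAMIC=y the model is resolved at runtime, otherwise it
collapses to the Kconfig selection.

	#ifdef CONFIG_PREEMPT_DYNAMIC
	/* Model is chosen at boot ("preempt=") and can change at runtime. */
	extern bool preempt_model_none(void);
	extern bool preempt_model_voluntary(void);
	extern bool preempt_model_full(void);
	#else
	static inline bool preempt_model_none(void)
	{
		return IS_ENABLED(CONFIG_PREEMPT_NONE);
	}
	static inline bool preempt_model_voluntary(void)
	{
		return IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY);
	}
	static inline bool preempt_model_full(void)
	{
		return IS_ENABLED(CONFIG_PREEMPT);
	}
	#endif

	static inline bool preempt_model_rt(void)
	{
		return IS_ENABLED(CONFIG_PREEMPT_RT);
	}

	/* Only the "full" and RT models preempt tasks holding spinlocks. */
	static inline bool preempt_model_preemptible(void)
	{
		return preempt_model_full() || preempt_model_rt();
	}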


base-commit: b29f377119f68b942369a9366bdcb1fec82b2cda
-- 
2.44.0.278.ge034bb2e1d-goog

