Message-ID: <ZkGtVO7uhcFXEeX6@gmail.com>
Date: Mon, 13 May 2024 08:04:04 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>, Will Deacon <will@...nel.org>,
	Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>,
	Borislav Petkov <bp@...en8.de>
Subject: [GIT PULL] locking changes for v6.10

Linus,

Please pull the latest locking/core Git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-core-2024-05-13

   # HEAD: 532453e7aa78f3962fb4d86caf40ff81ebf62160 locking/pvqspinlock/x86: Use _Q_LOCKED_VAL in PV_UNLOCK_ASM macro

Locking changes for v6.10:

 - Over a dozen code generation micro-optimizations for the atomic
   and spinlock code (see the try_cmpxchg() sketch below).

 - Add more __ro_after_init attributes (see the __ro_after_init
   sketch below).

 - Robustify the lockevent_*() macros (see the lockevent sketch below).
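
try_cmpxchg() sketch: a minimal, purely illustrative example (the
function, variable and flag names are made up, not taken from the
tree). Several of the patches below move open-coded cmpxchg() loops
to the try_cmpxchg() form, which updates the expected value in place
on failure and lets the compiler branch on the CMPXCHG flags output
instead of emitting a separate reload and CMP:

	#include <linux/atomic.h>

	#define EXAMPLE_FLAG	0x1		/* hypothetical flag bit */

	/* Old shape: open-coded loop, re-reads and re-compares the value. */
	static void example_set_flag_old(atomic_t *v)
	{
		int old, new;

		for (;;) {
			old = atomic_read(v);
			new = old | EXAMPLE_FLAG;
			if (atomic_cmpxchg(v, old, new) == old)
				break;
		}
	}

	/*
	 * New shape: atomic_try_cmpxchg() returns a bool and updates
	 * 'old' in place on failure, so no extra load/compare is needed.
	 */
	static void example_set_flag_new(atomic_t *v)
	{
		int old = atomic_read(v);
		int new;

		do {
			new = old | EXAMPLE_FLAG;
		} while (!atomic_try_cmpxchg(v, &old, new));
	}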
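
__ro_after_init sketch: the attribute places a variable in a section
that is remapped read-only once boot-time initialization completes,
so it suits data written exactly once during init (as done here for
context_tracking_key, kvm_async_pf_enabled and __use_tsc). The names
below are hypothetical, for illustration only:

	#include <linux/init.h>
	#include <linux/cache.h>	/* __ro_after_init */

	/* Written only during boot, read-only for the rest of runtime. */
	static bool example_feature_enabled __ro_after_init;

	static int __init example_feature_setup(char *str)
	{
		example_feature_enabled = true;	/* init time: write is fine */
		return 1;
	}
	__setup("example_feature", example_feature_setup);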
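
lockevent sketch: per Waiman Long's patch below, the lockevent_*()
macros now evaluate their non-event parameter exactly once whether or
not CONFIG_LOCK_EVENT_COUNTS is enabled, so callers see identical
side effects and no unused-variable warnings in both configurations.
This is a sketch of the pattern, not the literal lock_events.h diff:

	#ifdef CONFIG_LOCK_EVENT_COUNTS
	#define lockevent_cond_inc(ev, cond)			\
		do {						\
			if (cond)				\
				lockevent_inc(ev);		\
		} while (0)
	#else
	/* Counters compiled out: still evaluate 'cond' exactly once. */
	#define lockevent_cond_inc(ev, cond)	do { (void)(cond); } while (0)
	#endif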

 Thanks,

	Ingo

------------------>
Peter Zijlstra (1):
      jump_label,module: Don't alloc static_key_mod for __ro_after_init keys

Uros Bizjak (15):
      locking/atomic/x86: Correct the definition of __arch_try_cmpxchg128()
      locking/atomic/x86: Modernize x86_32 arch_{,try_}_cmpxchg64{,_local}()
      locking/atomic/x86: Introduce arch_try_cmpxchg64() for !CONFIG_X86_CMPXCHG64
      locking/atomic/x86: Introduce arch_atomic64_try_cmpxchg() to x86_32
      locking/atomic/x86: Introduce arch_atomic64_read_nonatomic() to x86_32
      locking/atomic/x86: Rewrite x86_32 arch_atomic64_{,fetch}_{and,or,xor}() functions
      locking/atomic/x86: Define arch_atomic_sub() family using arch_atomic_add() functions
      locking/qspinlock: Use atomic_try_cmpxchg_relaxed() in xchg_tail()
      locking/pvqspinlock: Use try_cmpxchg_acquire() in trylock_clear_pending()
      locking/pvqspinlock: Use try_cmpxchg() in qspinlock_paravirt.h
      locking/pvqspinlock/x86: Remove redundant CMP after CMPXCHG in __raw_callee_save___pv_queued_spin_unlock()
      locking/atomic/x86: Introduce arch_try_cmpxchg64_local()
      locking/atomic/x86: Merge __arch{,_try}_cmpxchg64_emu_local() with __arch{,_try}_cmpxchg64_emu()
      locking/qspinlock/x86: Micro-optimize virt_spin_lock()
      locking/pvqspinlock/x86: Use _Q_LOCKED_VAL in PV_UNLOCK_ASM macro

Valentin Schneider (3):
      context_tracking: Make context_tracking_key __ro_after_init
      x86/kvm: Make kvm_async_pf_enabled __ro_after_init
      x86/tsc: Make __use_tsc __ro_after_init

Waiman Long (1):
      locking/qspinlock: Always evaluate lockevent* non-event parameter once


 arch/x86/include/asm/atomic.h             |  12 +-
 arch/x86/include/asm/atomic64_32.h        |  79 +++++++----
 arch/x86/include/asm/atomic64_64.h        |  12 +-
 arch/x86/include/asm/cmpxchg_32.h         | 209 ++++++++++++++++++------------
 arch/x86/include/asm/cmpxchg_64.h         |   8 +-
 arch/x86/include/asm/qspinlock.h          |  13 +-
 arch/x86/include/asm/qspinlock_paravirt.h |   7 +-
 arch/x86/kernel/kvm.c                     |   2 +-
 arch/x86/kernel/tsc.c                     |   2 +-
 include/asm-generic/sections.h            |   5 +
 include/linux/jump_label.h                |   3 +
 init/main.c                               |   1 +
 kernel/context_tracking.c                 |   2 +-
 kernel/jump_label.c                       |  53 ++++++++
 kernel/locking/lock_events.h              |   4 +-
 kernel/locking/qspinlock.c                |  13 +-
 kernel/locking/qspinlock_paravirt.h       |  49 ++++---
 17 files changed, 297 insertions(+), 177 deletions(-)
