Message-ID: <159708609435.2571.13948681727529247231.tglx@nanos>
Date: Mon, 10 Aug 2020 19:01:34 -0000
From: Thomas Gleixner <tglx@...utronix.de>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org
Subject: [GIT pull] locking/urgent for 5.9-rc1
Linus,
please pull the latest locking/urgent branch from:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git locking-urgent-2020-08-10
up to: 0cd39f4600ed: locking/seqlock, headers: Untangle the spaghetti monster
A set of locking fixes and updates:
- Untangle the header spaghetti which causes build failures in various
situations. The failures are triggered by the lockdep additions to
seqcount which validate that the write side critical sections are
non-preemptible.
- The seqcount associated lock debug addons which were blocked by the
above fallout.
seqcount writers, contrary to seqlock writers, must be externally
serialized, which usually happens via locking - except for strict
per-CPU seqcounts. As the lock is not part of the seqcount, lockdep
cannot validate that the lock is held.
This new debug mechanism adds the concept of associated locks.
The sequence count now has lock type variants and corresponding
initializers which take a pointer to the associated lock used for
writer serialization. If lockdep is enabled, the pointer is stored and
write_seqcount_begin() has a lockdep assertion to validate that the
lock is held.
Aside from the type and the initializer, no other code changes are
required at the seqcount usage sites (a usage sketch follows below this
list). The rest of the seqcount API is unchanged and determines the
type at compile time with the help of _Generic, which is possible now
that the minimal GCC version has been raised.
Adding this lockdep coverage unearthed a handful of seqcount bugs which
have already been addressed independently of this series.
While generally useful, this comes with a Trojan Horse twist: On RT
kernels the write side critical section can become preemptible if the
writers are serialized by an associated lock, which leads to the well
known reader-preempts-writer livelock. RT prevents this by storing the
associated lock pointer in the seqcount independently of lockdep and
changing the reader side to block on the lock when a reader detects
that a writer is in the write side critical section.
- Conversion of seqcount usage sites to associated types and initializers.
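For illustration only, not part of this pull: a minimal sketch of what
a converted usage site might look like with the seqcount_spinlock_t
variant described above. The struct and function names below are made
up; only the seqlock API calls are the real ones.

  #include <linux/seqlock.h>
  #include <linux/spinlock.h>

  struct foo_stats {
          spinlock_t              lock;   /* serializes writers */
          seqcount_spinlock_t     seq;    /* associated with ->lock */
          u64                     bytes;
  };

  static void foo_stats_init(struct foo_stats *s)
  {
          spin_lock_init(&s->lock);
          /* The initializer records the associated lock for lockdep */
          seqcount_spinlock_init(&s->seq, &s->lock);
  }

  static void foo_stats_add(struct foo_stats *s, u64 b)
  {
          spin_lock(&s->lock);
          /* With lockdep enabled this asserts that ->lock is held */
          write_seqcount_begin(&s->seq);
          s->bytes += b;
          write_seqcount_end(&s->seq);
          spin_unlock(&s->lock);
  }

  static u64 foo_stats_read(struct foo_stats *s)
  {
          unsigned int start;
          u64 ret;

          do {
                  start = read_seqcount_begin(&s->seq);
                  ret = s->bytes;
          } while (read_seqcount_retry(&s->seq, start));

          return ret;
  }

The read side is unchanged; only the type and the initializer differ
from a plain seqcount_t.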
Thanks,
tglx
------------------>
Ahmed S. Darwish (16):
seqlock: Extend seqcount API with associated locks
seqlock: Align multi-line macros newline escapes at 72 columns
dma-buf: Remove custom seqcount lockdep class key
dma-buf: Use sequence counter with associated wound/wait mutex
sched: tasks: Use sequence counter with associated spinlock
netfilter: conntrack: Use sequence counter with associated spinlock
netfilter: nft_set_rbtree: Use sequence counter with associated rwlock
xfrm: policy: Use sequence counters with associated lock
timekeeping: Use sequence counter with associated raw spinlock
vfs: Use sequence counter with associated spinlock
raid5: Use sequence counter with associated spinlock
iocost: Use sequence counter with associated spinlock
NFSv4: Use sequence counter with associated spinlock
userfaultfd: Use sequence counter with associated spinlock
kvm/eventfd: Use sequence counter with associated spinlock
hrtimer: Use sequence counter with associated raw spinlock
Chris Wilson (1):
locking/lockdep: Fix overflow in presentation of average lock-time
Ingo Molnar (1):
x86/headers: Remove APIC headers from <asm/smp.h>
Peter Zijlstra (7):
seqlock: s/__SEQ_LOCKDEP/__SEQ_LOCK/g
seqlock: Fold seqcount_LOCKNAME_t definition
seqlock: Fold seqcount_LOCKNAME_init() definition
seqcount: Compress SEQCNT_LOCKNAME_ZERO()
seqcount: More consistent seqprop names
locking, arch/ia64: Reduce <asm/smp.h> header dependencies by moving XTP bits into the new <asm/xtp.h> header
locking/seqlock, headers: Untangle the spaghetti monster