Message-Id: <20190829083132.22394-1-duyuyang@gmail.com>
Date: Thu, 29 Aug 2019 16:31:02 +0800
From: Yuyang Du <duyuyang@...il.com>
To: peterz@...radead.org, will.deacon@....com, mingo@...nel.org
Cc: bvanassche@....org, ming.lei@...hat.com, frederic@...nel.org,
tglx@...utronix.de, linux-kernel@...r.kernel.org,
longman@...hat.com, paulmck@...ux.vnet.ibm.com,
boqun.feng@...il.com, Yuyang Du <duyuyang@...il.com>
Subject: [PATCH v4 00/30] Support recursive-read lock deadlock detection
Hi Peter and Ingo,
This patchset proposes a general read-write lock deadlock detection
algorithm, on the premise that an exclusive lock can be seen as a
special/partial usage of a read-write lock. All the current Linux
kernel lock types are covered by this algorithm. Most notably,
recursive-read locks are now well supported, which has not been the
case for more than a decade.
The bulk of the algorithm is in patch #27. Now that recursive-read
locks are supported, all 262 selftest cases pass.
At a minimum, please consider reviewing the minor fixes and the
no-functional-change patches.
Changes from v3:
- Reworded some changelogs
- Rebased to current code base
- Per Boqun's suggestion, reordered patches
- Much more time elapsed
Changes from v2:
- Handle rwsem locks correctly (hopefully).
- Remove indirect dependency redundancy check.
- Check direct dependency redundancy before validation.
- Compose lock chains for those with trylocks or separated by trylocks.
- Map lock dependencies to lock chains.
- Consolidate forward and backward lock_lists.
- Clearly and formally define two-task model for lockdep.
--
Yuyang Du (30):
locking/lockdep: Rename deadlock check functions
locking/lockdep: Change return type of add_chain_cache()
locking/lockdep: Change return type of lookup_chain_cache_add()
locking/lockdep: Pass lock chain from validate_chain() to
check_prev_add()
locking/lockdep: Add lock chain list_head field in struct lock_list
and lock_chain
locking/lockdep: Update comments in struct lock_list and held_lock
locking/lockdep: Remove indirect dependency redundancy check
locking/lockdep: Skip checks if direct dependency is already present
locking/lockdep: Remove chain_head argument in validate_chain()
locking/lockdep: Remove useless lock type assignment
locking/lockdep: Remove irq-safe to irq-unsafe read check
locking/lockdep: Specify the depth of current lock stack in
lookup_chain_cache_add()
locking/lockdep: Treat every lock dependency as in a new lock chain
locking/lockdep: Combine lock_lists in struct lock_class into an array
locking/lockdep: Consolidate forward and backward lock_lists into one
locking/lockdep: Add lock chains to direct lock dependency graph
locking/lockdep: Use lock type enum to explicitly specify read or
write locks
locking/lockdep: Add read-write type for a lock dependency
locking/lockdep: Add helper functions to operate on the searched path
locking/lockdep: Update direct dependency's read-write type if it
exists
locking/lockdep: Introduce chain_hlocks_type for held lock's
read-write type
locking/lockdep: Hash held lock's read-write type into chain key
locking/lockdep: Adjust BFS algorithm to support multiple matches
locking/lockdep: Define the two task model for lockdep checks formally
locking/lockdep: Introduce mark_lock_unaccessed()
locking/lockdep: Add nest lock type
locking/lockdep: Add lock exclusiveness table
locking/lockdep: Support read-write lock's deadlock detection
locking/lockdep: Adjust selftest case for recursive read lock
locking/lockdep: Add more lockdep selftest cases
include/linux/lockdep.h | 91 ++-
include/linux/rcupdate.h | 2 +-
kernel/locking/lockdep.c | 1227 ++++++++++++++++++++++++------------
kernel/locking/lockdep_internals.h | 3 +-
kernel/locking/lockdep_proc.c | 8 +-
lib/locking-selftest.c | 1109 +++++++++++++++++++++++++++++++-
6 files changed, 1981 insertions(+), 459 deletions(-)
--
1.8.3.1