Message-Id: <1504578554-4137-3-git-send-email-byungchul.park@lge.com>
Date:   Tue,  5 Sep 2017 11:29:13 +0900
From:   Byungchul Park <byungchul.park@....com>
To:     tj@...nel.org, johannes.berg@...el.com, peterz@...radead.org,
        mingo@...nel.org
Cc:     tglx@...utronix.de, oleg@...hat.com, david@...morbit.com,
        linux-kernel@...r.kernel.org, kernel-team@....com
Subject: [PATCH 2/3] lockdep: Introduce lock_acquire_might()

From the point of view of crossrelease, we can never be aware of the
release context in advance; we only learn it once we reach the
lock_release(). However, that means deadlocks cannot be reported at the
time they occur.

Sometimes we want to report that kind of problem anyway, at the risk of
generating false dependencies, e.g. the manual lock_acquire()s in
workqueue code, which inevitably create false dependencies with all
acquisitions made within work items.

It would be better to provide another primitive, lock_acquire_might(),
for that purpose, so that the lockdep internals know what users expect
and get a chance to be enhanced to avoid those false dependencies.

The primitive should:

   1. work like a trylock, since only the links from a
      lock_acquire_might() to later acquisitions are meaningful.
      Remember, this is meant to do in advance what the crossrelease
      commit does.

   2. have acquisitions made via lock_acquire_might() ignored on the
      crossrelease commit.
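
For illustration only, here is a minimal sketch of how a user might
annotate with the new primitive, assuming the usual lockdep_map
annotation pattern (the map and work function names are hypothetical,
not part of this patch):

#include <linux/lockdep.h>
#include <linux/workqueue.h>

/* Hypothetical map, following the usual lockdep_map annotation pattern. */
static struct lockdep_map my_work_map =
	STATIC_LOCKDEP_MAP_INIT("my_work", &my_work_map);

static void my_work_fn(struct work_struct *work)
{
	/*
	 * Record a "might acquire": it links to later acquisitions
	 * like a trylock would, but is ignored on the crossrelease
	 * commit, so it cannot add false dependencies there.
	 */
	lock_map_acquire_might(&my_work_map);
	/* ... body of the work item ... */
	lock_map_release(&my_work_map);
}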

TODO: Although it's worth reporting potential problems, we have to do
our best to avoid adding false dependencies to the graph, while the
current lockdep code (not only the crossrelease code) does nothing to
prevent them.

Signed-off-by: Byungchul Park <byungchul.park@....com>
---
 include/linux/lockdep.h  |  2 ++
 kernel/locking/lockdep.c | 20 ++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index bfa8e0b..68c806a 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -679,6 +679,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
 #define lock_acquire_exclusive(l, s, t, n, i)		lock_acquire(l, s, t, 0, 1, n, i)
 #define lock_acquire_shared(l, s, t, n, i)		lock_acquire(l, s, t, 1, 1, n, i)
 #define lock_acquire_shared_recursive(l, s, t, n, i)	lock_acquire(l, s, t, 2, 1, n, i)
+#define lock_acquire_might(l, s, t, n, i)		lock_acquire(l, s, t, 3, 1, n, i)
 
 #define spin_acquire(l, s, t, i)		lock_acquire_exclusive(l, s, t, NULL, i)
 #define spin_acquire_nest(l, s, t, n, i)	lock_acquire_exclusive(l, s, t, n, i)
@@ -704,6 +705,7 @@ static inline void print_irqtrace_events(struct task_struct *curr)
 #define lock_map_acquire(l)			lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_read(l)		lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_)
 #define lock_map_acquire_tryread(l)		lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_)
+#define lock_map_acquire_might(l)		lock_acquire_might(l, 0, 1, NULL, _THIS_IP_)
 #define lock_map_release(l)			lock_release(l, 1, _THIS_IP_)
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 0ac2f70..963c176 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -80,9 +80,25 @@
 #define LT_READ_N 1 /* 1: read non-recursion type */
 #define LT_READ_R 2 /* 2: read recursion type */
 
+/*
+ * This is used to detect deadlocks for crosslocks in real
+ * time. For example, workqueue code manually inserts such
+ * acquisitions for that purpose, at the risk of creating
+ * false dependencies. It would be better for lockdep to be
+ * aware of what users such as workqueue code want in that
+ * case, so it gets a chance to enhance its internals to
+ * avoid false dependencies as far as possible.
+ *
+ * TODO: For now, this flag is only used to skip this kind
+ * of acquisition on commit. Adding these false dependencies
+ * into the graph itself should also be avoided if possible.
+ */
+#define LT_MIGHT  3 /* 3: might acquire */
+
 #define is_write(a)  ((a) == LT_WRITE)
 #define is_read(a)   ((a) == LT_READ_N || (a) == LT_READ_R)
 #define is_read_r(a) ((a) == LT_READ_R)
+#define is_might(a)  ((a) == LT_MIGHT)
 
 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
@@ -4827,7 +4843,7 @@ static inline struct lock_class *xlock_class(struct cross_lock *xlock)
  */
 static inline int depend_before(struct held_lock *hlock)
 {
-	return !is_read_r(hlock->read) && hlock->check && !hlock->trylock;
+	return !is_read_r(hlock->read) && !is_might(hlock->read) && hlock->check && !hlock->trylock;
 }
 
 /*
@@ -4835,7 +4851,7 @@ static inline int depend_before(struct held_lock *hlock)
  */
 static inline int depend_after(struct held_lock *hlock)
 {
-	return !is_read_r(hlock->read) && hlock->check;
+	return !is_read_r(hlock->read) && !is_might(hlock->read) && hlock->check;
 }
 
 /*
-- 
1.9.1
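
As a quick sanity check of the predicate changes above, here is a
self-contained userspace sketch mirroring the patched logic (plain C,
not kernel code; the struct and values are reduced for illustration):

#include <stdio.h>

#define LT_WRITE  0	/* 0: write type */
#define LT_READ_N 1	/* 1: read non-recursion type */
#define LT_READ_R 2	/* 2: read recursion type */
#define LT_MIGHT  3	/* 3: might acquire */

#define is_read_r(a) ((a) == LT_READ_R)
#define is_might(a)  ((a) == LT_MIGHT)

struct held_lock { int read, check, trylock; };

/* Same tests as the patched depend_before()/depend_after(). */
static int depend_before(struct held_lock *h)
{
	return !is_read_r(h->read) && !is_might(h->read) &&
	       h->check && !h->trylock;
}

static int depend_after(struct held_lock *h)
{
	return !is_read_r(h->read) && !is_might(h->read) && h->check;
}

int main(void)
{
	struct held_lock write = { LT_WRITE, 1, 0 };
	struct held_lock might = { LT_MIGHT, 1, 1 }; /* trylock-like */

	/* A write acquisition contributes dependencies in both
	 * directions; a might acquisition is excluded from the
	 * commit entirely. */
	printf("write: before=%d after=%d\n",
	       depend_before(&write), depend_after(&write));
	printf("might: before=%d after=%d\n",
	       depend_before(&might), depend_after(&might));
	return 0;
}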
