Message-ID: <20170301155414.GN6515@twins.programming.kicks-ass.net>
Date: Wed, 1 Mar 2017 16:54:14 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Fengguang Wu <fengguang.wu@...el.com>
Cc: Boqun Feng <boqun.feng@...il.com>,
Nicolai Hähnle <Nicolai.Haehnle@....com>,
Chris Wilson <chris@...is-wilson.co.uk>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
LKP <lkp@...org>
Subject: Re: [locking/ww_mutex] 2a0c112828 WARNING: CPU: 0 PID: 18 at
kernel/locking/mutex.c:305 __ww_mutex_wakeup_for_backoff

On Wed, Mar 01, 2017 at 11:40:43PM +0800, Fengguang Wu wrote:
> Thanks for the patch! I applied the patch on top of "locking/ww_mutex:
> Add kselftests for ww_mutex stress", and find no "bad unlock balance
> detected" but this warning. Attached is the new dmesg which is a bit
> large due to lots of repeated errors.

So with all the various patches it works for me.

I also have the following on top, which I did when I was looking through
this code trying to figure out wth was happening.

Chris, does this make sense to you?

It makes each loop a fully new 'instance'; otherwise we'll never update
the ww_class->stamp and the threads will always have the same order.
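
For reference, the stamp comes from ww_acquire_init(); roughly this
(trimmed excerpt from include/linux/ww_mutex.h, not part of this patch):

	static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
					   struct ww_class *ww_class)
	{
		ctx->task = current;
		/* Each context draws one stamp from the class-wide counter;
		 * an older (smaller) stamp wins under contention. */
		ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
		ctx->acquired = 0;
		/* debug/lockdep initialization trimmed */
	}

So a context initialized once outside the loop keeps the same stamp for
every iteration, and the relative ordering between the worker threads
never changes.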
---
diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index da6c9a34f62f..d0fd06429c9d 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -398,12 +398,11 @@ static void stress_inorder_work(struct work_struct *work)
 	if (!order)
 		return;
 
-	ww_acquire_init(&ctx, &ww_class);
-
 	do {
 		int contended = -1;
 		int n, err;
 
+		ww_acquire_init(&ctx, &ww_class);
 retry:
 		err = 0;
 		for (n = 0; n < nlocks; n++) {
@@ -433,9 +432,9 @@ static void stress_inorder_work(struct work_struct *work)
 			       __func__, err);
 			break;
 		}
-	} while (--stress->nloops);
 
-	ww_acquire_fini(&ctx);
+		ww_acquire_fini(&ctx);
+	} while (--stress->nloops);
 
 	kfree(order);
 	kfree(stress);
@@ -470,9 +469,9 @@ static void stress_reorder_work(struct work_struct *work)
 	kfree(order);
 	order = NULL;
 
-	ww_acquire_init(&ctx, &ww_class);
-
 	do {
+		ww_acquire_init(&ctx, &ww_class);
+
 		list_for_each_entry(ll, &locks, link) {
 			err = ww_mutex_lock(ll->lock, &ctx);
 			if (!err)
@@ -495,9 +494,9 @@ static void stress_reorder_work(struct work_struct *work)
 		dummy_load(stress);
 		list_for_each_entry(ll, &locks, link)
 			ww_mutex_unlock(ll->lock);
-	} while (--stress->nloops);
 
-	ww_acquire_fini(&ctx);
+		ww_acquire_fini(&ctx);
+	} while (--stress->nloops);
 
 out:
 	list_for_each_entry_safe(ll, ln, &locks, link)