Message-ID: <169537029684.27769.17114350620697997504.tip-bot2@tip-bot2>
Date: Fri, 22 Sep 2023 08:11:36 -0000
From: "tip-bot2 for John Stultz" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Li Zhijian <zhijianx.li@...el.com>,
John Stultz <jstultz@...gle.com>,
Ingo Molnar <mingo@...nel.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: locking/core] locking/ww_mutex/test: Make sure we bail out
instead of livelock
The following commit has been merged into the locking/core branch of tip:
Commit-ID: cfa92b6d52071aaa8f27d21affdcb14e7448fbc1
Gitweb: https://git.kernel.org/tip/cfa92b6d52071aaa8f27d21affdcb14e7448fbc1
Author: John Stultz <jstultz@...gle.com>
AuthorDate: Fri, 22 Sep 2023 04:36:01
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Fri, 22 Sep 2023 09:43:41 +02:00
locking/ww_mutex/test: Make sure we bail out instead of livelock
I've seen what appear to be livelocks in the stress_inorder_work()
function, and looking at the code it is clear we can hit a case
where we continually retry acquiring the locks and never check
whether we have passed the specified timeout.
This patch reworks that function so we always check the timeout
before iterating through the loop again.
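For context, here is a trimmed sketch of the pre-patch loop in
stress_inorder_work() (paraphrased from kernel/locking/test-ww_mutex.c;
the contended-lock bookkeeping, the unlock pass and the dummy_load()
call are elided). It is only meant to show the shape of the problem:
the sole timeout check is the do/while condition at the bottom, and the
-EDEADLK path jumps back to retry without ever reaching it.

	do {
		ww_acquire_init(&ctx, &ww_class);
retry:
		err = 0;
		for (n = 0; n < nlocks; n++) {
			err = ww_mutex_lock(&locks[order[n]], &ctx);
			if (err < 0)
				break;
		}

		/* ... unlock whatever we managed to take ... */

		if (err == -EDEADLK) {
			/*
			 * Pre-patch behaviour: back off and retry
			 * unconditionally. Under sustained contention this
			 * path can repeat forever, because the timeout test
			 * below is never reached.
			 */
			ww_mutex_lock_slow(&locks[order[contended]], &ctx);
			goto retry;
		}

		if (err)
			break;

		ww_acquire_fini(&ctx);
	} while (!time_after(jiffies, stress->timeout)); /* sole timeout check */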
I believe others may have hit this previously here:
https://lore.kernel.org/lkml/895ef450-4fb3-5d29-a6ad-790657106a5a@intel.com/
Reported-by: Li Zhijian <zhijianx.li@...el.com>
Signed-off-by: John Stultz <jstultz@...gle.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Link: https://lore.kernel.org/r/20230922043616.19282-4-jstultz@google.com
---
kernel/locking/test-ww_mutex.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index 358d661..78719e1 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -465,17 +465,18 @@ retry:
 			ww_mutex_unlock(&locks[order[n]]);
 
 		if (err == -EDEADLK) {
-			ww_mutex_lock_slow(&locks[order[contended]], &ctx);
-			goto retry;
+			if (!time_after(jiffies, stress->timeout)) {
+				ww_mutex_lock_slow(&locks[order[contended]], &ctx);
+				goto retry;
+			}
 		}
 
+		ww_acquire_fini(&ctx);
 		if (err) {
 			pr_err_once("stress (%s) failed with %d\n",
 				    __func__, err);
 			break;
 		}
-
-		ww_acquire_fini(&ctx);
 	} while (!time_after(jiffies, stress->timeout));
 
 	kfree(order);
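Read back with the hunk applied, the tail of the loop would look
roughly like the reconstruction below (built only from the diff above,
surrounding code elided): a contended pass now backs off and retries
only while stress->timeout has not expired; once it has, the pass falls
through, finishes the acquire context with ww_acquire_fini(), and
leaves the loop through the existing error path.

		if (err == -EDEADLK) {
			if (!time_after(jiffies, stress->timeout)) {
				/* Still within budget: back off, retry. */
				ww_mutex_lock_slow(&locks[order[contended]], &ctx);
				goto retry;
			}
			/* Timed out: fall through and bail out below. */
		}

		ww_acquire_fini(&ctx);
		if (err) {
			pr_err_once("stress (%s) failed with %d\n",
				    __func__, err);
			break;
		}
	} while (!time_after(jiffies, stress->timeout));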