Message-ID: <tip-38460a2178d225b39ade5ac66586c3733391cf86@git.kernel.org>
Date: Thu, 10 Mar 2016 03:06:14 -0800
From: tip-bot for Davidlohr Bueso <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, tglx@...utronix.de, dbueso@...e.de,
hpa@...or.com, torvalds@...ux-foundation.org, mingo@...nel.org,
peterz@...radead.org
Subject: [tip:locking/core] locking/csd_lock: Use smp_cond_acquire() in
csd_lock_wait()

Commit-ID: 38460a2178d225b39ade5ac66586c3733391cf86
Gitweb: http://git.kernel.org/tip/38460a2178d225b39ade5ac66586c3733391cf86
Author: Davidlohr Bueso <dave@...olabs>
AuthorDate: Wed, 9 Mar 2016 17:55:36 -0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 10 Mar 2016 10:28:35 +0100

locking/csd_lock: Use smp_cond_acquire() in csd_lock_wait()

We can micro-optimize this call and mildly relax the
barrier requirements by relying on ctrl + rmb, keeping
the acquire semantics. In addition, this is now pretty
much the standard pattern for busy-waiting under such
constraints.
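
For reference, smp_cond_acquire() was defined at the time in
include/linux/compiler.h roughly as follows (a sketch, not a
verbatim quote of the kernel source):

  #define smp_cond_acquire(cond) do {            \
          while (!(cond))                        \
                  cpu_relax();                   \
          smp_rmb(); /* ctrl + rmb := acquire */ \
  } while (0)

The control dependency of the loop branch orders the load of
the condition against later stores, and the trailing smp_rmb()
adds load->load ordering; together they give ACQUIRE semantics
without issuing a full smp_load_acquire() on every spin
iteration.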
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: dave@...olabs.net
Link: http://lkml.kernel.org/r/1457574936-19065-3-git-send-email-dbueso@suse.de
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
kernel/smp.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 5099db1..300d293 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -107,8 +107,7 @@ void __init call_function_init(void)
*/
static __always_inline void csd_lock_wait(struct call_single_data *csd)
{
- while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK)
- cpu_relax();
+ smp_cond_acquire(!(csd->flags & CSD_FLAG_LOCK));
}

 static __always_inline void csd_lock(struct call_single_data *csd)