Message-Id: <1457574936-19065-3-git-send-email-dbueso@suse.de>
Date: Wed, 9 Mar 2016 17:55:36 -0800
From: Davidlohr Bueso <dbueso@...e.de>
To: mingo@...nel.org
Cc: peterz@...radead.org, dave@...olabs.net,
linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH 2/2] kernel/smp: Make csd_lock_wait() use smp_cond_acquire()
From: Davidlohr Bueso <dave@...olabs>
We can micro-optimize this call and mildly relax the
barrier requirements by relying on ctrl + rmb while
keeping the acquire semantics. In addition, this is
now pretty much the standard pattern for busy-waiting
under such constraints.
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
kernel/smp.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index c91e00178f8f..74165443c240 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -107,8 +107,7 @@ void __init call_function_init(void)
*/
static __always_inline void csd_lock_wait(struct call_single_data *csd)
{
- while (smp_load_acquire(&csd->flags) & CSD_FLAG_LOCK)
- cpu_relax();
+ smp_cond_acquire(!(csd->flags & CSD_FLAG_LOCK));
}
static __always_inline void csd_lock(struct call_single_data *csd)
--
2.1.4