Message-Id: <20191108160858.31665-5-frederic@kernel.org>
Date: Fri, 8 Nov 2019 17:08:58 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <frederic@...nel.org>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [RFC PATCH 4/4] irq_work: Weaken ordering in irq_work_run_list()

(This patch is an RFC because it makes the code less clear in favour of
an ordering optimization, and that ordering has yet to pass under
careful eyes. I'm not sure it's worth it.)

Clearing IRQ_WORK_PENDING with atomic_fetch_andnot() lets us know the
old value of flags, which we can reuse in the later cmpxchg() to clear
IRQ_WORK_BUSY if necessary.

However there is no need to read flags atomically here: its value can't
be concurrently changed before we clear the IRQ_WORK_PENDING bit. So we
can safely read it before the call to atomic_fetch_andnot(), which can
then become atomic_andnot() followed by smp_mb__after_atomic(). This
preserves the ordering that makes sure we see the latest updates
preceding a concurrent irq_work_claim() which, having observed the work
already queued, didn't raise a new IPI.
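
To make the intended pairing easier to eyeball, here is a minimal
userspace sketch using C11 atomics rather than the kernel API. The flag
values, the irq_work_run_one() helper and the seq_cst fence standing in
for smp_mb__after_atomic() are illustrative assumptions, not kernel
code:

    /*
     * Sketch of the claim/run pairing. In C11 the seq_cst RMWs are
     * already fully ordered; the explicit fence mirrors the kernel
     * pattern, where atomic_andnot() provides no ordering by itself.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define IRQ_WORK_PENDING  (1 << 0)
    #define IRQ_WORK_BUSY     (1 << 1)
    #define IRQ_WORK_CLAIMED  (IRQ_WORK_PENDING | IRQ_WORK_BUSY)

    struct irq_work {
            atomic_int flags;
            void (*func)(struct irq_work *);
    };

    /* Claim side: returns true when the caller must raise the IPI. */
    static bool irq_work_claim(struct irq_work *work)
    {
            int oflags = atomic_fetch_or(&work->flags, IRQ_WORK_CLAIMED);

            /*
             * Already pending: the run side's full barrier after
             * clearing PENDING guarantees it will observe everything
             * written before this claim, so no new IPI is needed.
             */
            return !(oflags & IRQ_WORK_PENDING);
    }

    /* Run side: what irq_work_run_list() does for one work item. */
    static void irq_work_run_one(struct irq_work *work)
    {
            /*
             * Safe to read non-atomically in the kernel: nobody can
             * change flags until PENDING is cleared below.
             */
            int flags = atomic_load(&work->flags);

            /* atomic_andnot() + smp_mb__after_atomic() in the patch. */
            atomic_fetch_and(&work->flags, ~IRQ_WORK_PENDING);
            atomic_thread_fence(memory_order_seq_cst);

            work->func(work);

            /* Drop BUSY only if nobody re-claimed the work meanwhile. */
            int expected = flags & ~IRQ_WORK_PENDING;
            atomic_compare_exchange_strong(&work->flags, &expected,
                                           flags & ~IRQ_WORK_CLAIMED);
    }

The point is that the claim side only needs the run side to be fully
ordered after clearing PENDING; it never needs the andnot itself to
return a value.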
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>
---
kernel/irq_work.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 49c53f80a13a..b709ab05cbfd 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -34,8 +34,8 @@ static bool irq_work_claim(struct irq_work *work)
         oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
         /*
          * If the work is already pending, no need to raise the IPI.
-         * The pairing atomic_fetch_andnot() in irq_work_run() makes sure
-         * everything we did before is visible.
+         * The pairing atomic_andnot() followed by a barrier in irq_work_run()
+         * makes sure everything we did before is visible.
          */
         if (oflags & IRQ_WORK_PENDING)
                 return false;
@@ -143,7 +143,7 @@ static void irq_work_run_list(struct llist_head *list)
         llnode = llist_del_all(list);
         llist_for_each_entry_safe(work, tmp, llnode, llnode) {
-                int flags;
+                int flags = atomic_read(&work->flags);
                 /*
                  * Clear the PENDING bit, after this point the @work
                  * can be re-used.
@@ -151,14 +151,16 @@ static void irq_work_run_list(struct llist_head *list)
                  * to claim that work don't rely on us to handle their data
                  * while we are in the middle of the func.
                  */
-                flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
+                atomic_andnot(IRQ_WORK_PENDING, &work->flags);
+                smp_mb__after_atomic();
                 work->func(work);
                 /*
                  * Clear the BUSY bit and return to the free state if
                  * no-one else claimed it meanwhile.
                  */
-                (void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
+                (void)atomic_cmpxchg(&work->flags, flags & ~IRQ_WORK_PENDING,
+                                     flags & ~IRQ_WORK_CLAIMED);
         }
 }
--
2.23.0