Message-Id: <1405730661-9355-6-git-send-email-fweisbec@gmail.com>
Date:	Sat, 19 Jul 2014 02:44:16 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	LKML <linux-kernel@...r.kernel.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Ingo Molnar <mingo@...nel.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Viresh Kumar <viresh.kumar@...aro.org>
Subject: [PATCH 05/10] smp: Fast path check on IPI list

When we enqueue a remote irq work, we trigger the same IPI as the one
raised by the smp_call_function_*() family.

So when we receive such an IPI, we check both the irq_work and
smp_call_function queues. Thus, if we trigger a remote irq work, we'll
likely find the smp_call_function queue empty, unless we collide with
concurrent enqueuers; but the probability of that is low.
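
In sketch form, the IPI handler thus has the following shape (the
helper name run_call_single_queue() below is illustrative, not a real
kernel function):

       void ipi_handler(void)
       {
               /* 1) Flush remote smp_call_function_*() requests */
               run_call_single_queue();   /* llist_del_all() + run each csd */
               /* 2) Run works queued by irq_work_queue_on() */
               irq_work_run();
       }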

Meanwhile, checking the smp_call_function queue can be costly because
we use llist_del_all(), which relies on cmpxchg().
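
For reference, here is a simplified sketch of the two llist primitives
involved, modeled on include/linux/llist.h (not verbatim; details may
differ across versions): llist_empty() is a plain load, while
llist_del_all() is an atomic exchange that takes the cache line
exclusive even when the list turns out to be empty.

       static inline bool llist_empty(const struct llist_head *head)
       {
               /* Plain load: no locked RMW, no exclusive cache-line grab */
               return ACCESS_ONCE(head->first) == NULL;
       }

       static inline struct llist_node *llist_del_all(struct llist_head *head)
       {
               /* Atomic exchange: dirties the cache line even when empty */
               return xchg(&head->first, NULL);
       }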

We can reduce this overhead with a fast-path check using llist_empty(),
given the implicit IPI ordering:

       Enqueuer                            Dequeuer
       --------                            --------
       llist_add(csd, queue)               get_IPI() {
       send_IPI()                              if (llist_empty(queue))
                                                   ...
When the IPI is sent, we are guaranteed that the IPI receiver will
see the new csd.
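
The enqueuer side, simplified from generic_exec_single() (the function
name kick_remote below is illustrative): llist_add() implies a full
barrier via its cmpxchg(), so the csd is globally visible before the
IPI is raised, and the dequeuer's llist_empty() check cannot miss it.

       static void kick_remote(int cpu, struct call_single_data *csd)
       {
               /*
                * llist_add() is a full barrier, so @csd is visible to
                * the target CPU before the IPI below is sent. Only the
                * enqueuer that finds the list empty raises the IPI.
                */
               if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
                       arch_send_call_function_single_ipi(cpu);
       }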

So let's do the fast-path check to optimize jobs not related to
smp_call_function().

Cc: Ingo Molnar <mingo@...nel.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Viresh Kumar <viresh.kumar@...aro.org>
Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
---
 kernel/smp.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index a1812d1..34378d4 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -184,11 +184,19 @@ static int generic_exec_single(int cpu, struct call_single_data *csd,
  */
 void generic_smp_call_function_single_interrupt(void)
 {
+	struct llist_head *head = &__get_cpu_var(call_single_queue);
 	struct llist_node *entry;
 	struct call_single_data *csd, *csd_next;
 	static bool warned;
 
-	entry = llist_del_all(&__get_cpu_var(call_single_queue));
+	/*
+	 * Fast check: in case of irq work remote queue, the IPI list
+	 * is likely empty. We can spare the expensive llist_del_all().
+	 */
+	if (llist_empty(head))
+		goto irq_work;
+
+	entry = llist_del_all(head);
 	entry = llist_reverse_order(entry);
 
 	/*
@@ -212,6 +220,7 @@ void generic_smp_call_function_single_interrupt(void)
 		csd_unlock(csd);
 	}
 
+irq_work:
 	/*
 	 * Handle irq works queued remotely by irq_work_queue_on().
 	 * Smp functions above are typically synchronous so they
-- 
1.8.3.1

