Date:   Thu, 28 Jan 2021 11:33:48 +0100
From:   Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To:     Leonardo Bras <leobras.c@...il.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Wei Yongjun <weiyongjun1@...wei.com>,
        Qais Yousef <qais.yousef@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] kernel/smp: Split call_single_queue into 3 queues

On 2021-01-28 03:55:06 [-0300], Leonardo Bras wrote:
> Currently, during flush_smp_call_function_queue():
> - All items are traversed once, for reversing the list order.
> - The SYNC items are traversed twice.
> - The ASYNC & IRQ_WORK items are traversed three times.
> - The TTWU items are traversed four times.
> 
> Also, a lot of extra work is done to keep track and remove the items
> already processed in each step.
> 
> By using three queues, it's possible to avoid all this work, and
> all items in the list are traversed only twice: once for reversing,
> and once for processing.
> 
> In exchange, this requires two extra llist_del_all() calls at the
> beginning of flush_smp_call_function_queue(), and some extra logic
> to decide the correct queue to add the desired csd to.
> 
> This is not supposed to cause any change in the order the items are
> processed, but it will change the order of printing in the CPU-offlining
> case to match the order in which the items are processed.
> 
> (The traversal counts above ignore the CPU-offlining case, in which
> all items would be traversed once more, in both the old and the new
> code.)

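For the record, on the flush side the proposal then amounts to roughly
the following. This is a minimal sketch only: the per-type queue names
are hypothetical and the actual patch is not reproduced here.

	/* call_single_queue_{sync,async,ttwu} are hypothetical per-CPU
	 * llist_heads; the real names come from the patch itself. */
	static void flush_split_queues_sketch(void)
	{
		struct llist_node *sync, *async, *ttwu;

		/* One llist_del_all() per queue: two more than before. */
		sync  = llist_del_all(this_cpu_ptr(&call_single_queue_sync));
		async = llist_del_all(this_cpu_ptr(&call_single_queue_async));
		ttwu  = llist_del_all(this_cpu_ptr(&call_single_queue_ttwu));

		/* Traversal one: restore FIFO order per queue. */
		sync  = llist_reverse_order(sync);
		async = llist_reverse_order(async);
		ttwu  = llist_reverse_order(ttwu);

		/*
		 * Traversal two: each list now holds only one kind of CSD,
		 * so every item is processed exactly once and no bookkeeping
		 * is needed to skip entries handled in an earlier pass.
		 */
	}
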
Numbers would be good. Having three queues increases the memory
footprint from one pointer to three, but the three list heads still
fit within a single cache line, as the sketch below illustrates.
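On 64-bit that is three pointers, i.e. 24 bytes. If the three heads
are grouped (hypothetically) in one per-CPU struct, that is easy to
assert at build time:

	/* Hypothetical grouping of the three per-CPU queue heads. */
	struct call_single_queues {
		struct llist_head sync;
		struct llist_head async;
		struct llist_head ttwu;
	};
	static_assert(sizeof(struct call_single_queues) <= SMP_CACHE_BYTES);
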
One difference your patch makes is this hunk:

> +	if (smp_add_to_queue(cpu, node))
>  		send_call_function_single_ipi(cpu);

Previously, only the first addition to the (single) queue resulted in
sending an IPI. With this change you could send two IPIs, one for each
of two additions that land on independent, previously empty queues.
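That is, assuming smp_add_to_queue() ends up looking something like
the sketch below. The queue selection and the grouping of ASYNC with
IRQ_WORK are my guess, not taken from the patch; what matters here is
the return value.

	static bool smp_add_to_queue(int cpu, struct llist_node *node)
	{
		call_single_data_t *csd;
		struct llist_head *head;

		csd = container_of(node, call_single_data_t, node.llist);
		switch (CSD_TYPE(csd)) {
		case CSD_TYPE_SYNC:
			head = &per_cpu(call_single_queue_sync, cpu);
			break;
		case CSD_TYPE_TTWU:
			head = &per_cpu(call_single_queue_ttwu, cpu);
			break;
		default: /* ASYNC and IRQ_WORK */
			head = &per_cpu(call_single_queue_async, cpu);
			break;
		}
		/* llist_add() returns true iff the list was empty before. */
		return llist_add(node, head);
	}

Two back-to-back enqueues for the same CPU, say one SYNC and one TTWU
csd, then both find their queue empty, both return true, and both send
an IPI where the old single-queue code would have sent only one.
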

A quick smoke test, with the patch at the bottom of this mail, ended up with:
          <idle>-0       [005] d..h1..   146.255996: flush_smp_call_function_queue: A1 S2 I0 T0 X3

This shows that, at least in my smoke test, the number of items in the
individual lists stays low.

diff --git a/kernel/smp.c b/kernel/smp.c
index 1b6070bf97bb0..3acce385b9f97 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -336,6 +336,11 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
 	struct llist_node *entry, *prev;
 	struct llist_head *head;
 	static bool warned;
+	int num_async = 0;
+	int num_sync = 0;
+	int num_irqw = 0;
+	int num_twu = 0;
+	int total = 0;
 
 	lockdep_assert_irqs_disabled();
 
@@ -343,6 +348,33 @@ static void flush_smp_call_function_queue(bool warn_cpu_offline)
 	entry = llist_del_all(head);
 	entry = llist_reverse_order(entry);
 
+	llist_for_each_entry(csd, entry, node.llist) {
+		switch (CSD_TYPE(csd)) {
+			case CSD_TYPE_ASYNC:
+				num_async++;
+				break;
+			case CSD_TYPE_SYNC:
+				num_sync++;
+				break;
+
+			case CSD_TYPE_IRQ_WORK:
+				num_irqw++;
+				break;
+
+			case CSD_TYPE_TTWU:
+				num_twu++;
+				break;
+
+			default:
+				pr_warn("hmmmm\n");
+				break;
+		}
+	}
+	total = num_async + num_sync + num_irqw + num_twu;
+	if (total > 2)
+		trace_printk("A%d S%d I%d T%d X%d\n", num_async, num_sync, num_irqw, num_twu,
+			     total);
+
 	/* There shouldn't be any pending callbacks on an offline CPU. */
 	if (unlikely(warn_cpu_offline && !cpu_online(smp_processor_id()) &&
 		     !warned && !llist_empty(head))) {

Sebastian
