Message-ID: <20230322095329.GS2017917@hirez.programming.kicks-ass.net>
Date: Wed, 22 Mar 2023 10:53:29 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <vschneid@...hat.com>
Cc: linux-alpha@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-snps-arc@...ts.infradead.org,
linux-arm-kernel@...ts.infradead.org, linux-csky@...r.kernel.org,
linux-hexagon@...r.kernel.org, linux-ia64@...r.kernel.org,
loongarch@...ts.linux.dev, linux-mips@...r.kernel.org,
openrisc@...ts.librecores.org, linux-parisc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-riscv@...ts.infradead.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, linux-xtensa@...ux-xtensa.org,
x86@...nel.org, "Paul E. McKenney" <paulmck@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Juri Lelli <juri.lelli@...hat.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Marc Zyngier <maz@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Russell King <linux@...linux.org.uk>,
Nicholas Piggin <npiggin@...il.com>,
Guo Ren <guoren@...nel.org>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH v5 7/7] sched, smp: Trace smp callback causing an IPI
On Tue, Mar 07, 2023 at 02:35:58PM +0000, Valentin Schneider wrote:
> @@ -477,6 +490,25 @@ static __always_inline void csd_unlock(struct __call_single_data *csd)
> smp_store_release(&csd->node.u_flags, 0);
> }
>
> +static __always_inline void
> +raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
> +{
> + /*
> + * The list addition should be visible to the target CPU when it pops
> + * the head of the list to pull the entry off it in the IPI handler
> + * because of normal cache coherency rules implied by the underlying
> + * llist ops.
> + *
> + * If IPIs can go out of order to the cache coherency protocol
> + * in an architecture, sufficient synchronisation should be added
> + * to arch code to make it appear to obey cache coherency WRT
> + * locking and barrier primitives. Generic code isn't really
> + * equipped to do the right thing...
> + */
> + if (llist_add(node, &per_cpu(call_single_queue, cpu)))
> + send_call_function_single_ipi(cpu, func);
> +}
> +
> static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
>
> void __smp_call_single_queue(int cpu, struct llist_node *node)
> @@ -493,21 +525,25 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
> }
> }
> #endif
> /*
> + * We have to check the type of the CSD before queueing it, because
> + * once queued it can have its flags cleared by
> + * flush_smp_call_function_queue()
> + * even if we haven't sent the smp_call IPI yet (e.g. the stopper
> + * executes migration_cpu_stop() on the remote CPU).
> */
> + if (trace_ipi_send_cpumask_enabled()) {
> + call_single_data_t *csd;
> + smp_call_func_t func;
> +
> + csd = container_of(node, call_single_data_t, node.llist);
> + func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
> + sched_ttwu_pending : csd->func;
> +
> + raw_smp_call_single_queue(cpu, node, func);
> + } else {
> + raw_smp_call_single_queue(cpu, node, NULL);
> + }
> }
Hurmph... so we only really consume @func when we IPI. Would it not be
more useful to trace this thing for *every* csd enqueued?
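
A rough sketch of that alternative (not from this series; trace_csd_queue_cpu()
and its _enabled() helper are made-up names, purely for illustration) would be
to hook the enqueue itself and keep the IPI path free of @func:

static __always_inline void
raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
{
	/* Hypothetical tracepoint: fires for every csd enqueued, IPI or not. */
	if (trace_csd_queue_cpu_enabled())
		trace_csd_queue_cpu(cpu, _RET_IP_, func, node);

	/*
	 * As before, only kick the remote CPU when the queue goes from
	 * empty to non-empty; @func no longer needs to be passed down.
	 */
	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
		send_call_function_single_ipi(cpu);
}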