Message-ID: <C77F4C58-9CA3-4784-AE98-A9D6EDD4A788@vmware.com>
Date: Fri, 5 Jul 2019 01:26:10 +0000
From: Nadav Amit <namit@...are.com>
To: Thomas Gleixner <tglx@...utronix.de>
CC: LKML <linux-kernel@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>,
Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>,
Stephane Eranian <eranian@...gle.com>,
Feng Tang <feng.tang@...el.com>
Subject: Re: [patch V2 21/25] x86/smp: Enhance native_send_call_func_ipi()
> On Jul 4, 2019, at 8:52 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
>
> Nadav noticed that the cpumask allocations in native_send_call_func_ipi()
> are noticeable in microbenchmarks.
>
> Use the new cpumask_or_equal() function to simplify the decision whether
> the supplied target CPU mask is either equal to cpu_online_mask or equal to
> cpu_online_mask except for the CPU on which the function is invoked.
>
> cpumask_or_equal() or's the target mask and the cpumask of the current CPU
> together and compares it to cpu_online_mask.
>
> If the result is false, use the mask based IPI function, otherwise check
> whether the current CPU is set in the target mask and invoke either the
> send_IPI_all() or the send_IPI_allbutself() APIC callback.
>
> Make the shorthand decision also depend on the static key which enables
> shorthand mode. That allows removing the extra cpumask comparison with
> cpu_callout_mask.
>
> Reported-by: Nadav Amit <namit@...are.com>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> ---
> V2: New patch
> ---
> arch/x86/kernel/apic/ipi.c | 24 +++++++++++-------------
> 1 file changed, 11 insertions(+), 13 deletions(-)
>
> --- a/arch/x86/kernel/apic/ipi.c
> +++ b/arch/x86/kernel/apic/ipi.c
> @@ -83,23 +83,21 @@ void native_send_call_func_single_ipi(in
>
> void native_send_call_func_ipi(const struct cpumask *mask)
> {
> - cpumask_var_t allbutself;
> + if (static_branch_likely(&apic_use_ipi_shorthand)) {
> + unsigned int cpu = smp_processor_id();
>
> - if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
> - apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> + if (!cpumask_or_equal(mask, cpumask_of(cpu), cpu_online_mask))
> + goto sendmask;
> +
> + if (cpumask_test_cpu(cpu, mask))
> + apic->send_IPI_all(CALL_FUNCTION_VECTOR);
> + else if (num_online_cpus() > 1)
> + apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
> return;
> }
>
> - cpumask_copy(allbutself, cpu_online_mask);
> - cpumask_clear_cpu(smp_processor_id(), allbutself);
> -
> - if (cpumask_equal(mask, allbutself) &&
> - cpumask_equal(cpu_online_mask, cpu_callout_mask))
> - apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
> - else
> - apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> -
> - free_cpumask_var(allbutself);
> +sendmask:
> + apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
> }
>
> #endif /* CONFIG_SMP */
It does look better and simpler than my solution.