Message-ID: <20191025140528.GB23240@rkaganb.sw.ru>
Date: Fri, 25 Oct 2019 14:05:34 +0000
From: Roman Kagan <rkagan@...tuozzo.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
CC: "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Sasha Levin <sashal@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Michael Kelley <mikelley@...rosoft.com>,
Joe Perches <joe@...ches.com>
Subject: Re: [PATCH v2] x86/hyper-v: micro-optimize send_ipi_one case
On Fri, Oct 25, 2019 at 03:15:46PM +0200, Vitaly Kuznetsov wrote:
> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> With a 2-CPU guest on WS2019 I'm seeing a minor (~3%, 8043 -> 7761 CPU
> cycles) improvement with the smp_call_function_single() loop benchmark. The
> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> important for the PV spinlock kick.
>
> I was also wondering if it would make sense to switch to using the regular
> APIC IPI send for the CPU > 64 case but no, it is twice as expensive (12650
> CPU cycles for an __send_ipi_mask_ex() call, 26000 for
> orig_apic.send_IPI(cpu, vector)).
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> Changes since v1:
> - Style changes [Roman, Joe]
> ---
> arch/x86/hyperv/hv_apic.c | 13 ++++++++++---
> arch/x86/include/asm/trace/hyperv.h | 15 +++++++++++++++
> 2 files changed, 25 insertions(+), 3 deletions(-)
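FWIW, for readers of the archive: the single-CPU fast path being added is
roughly the following. This is just a sketch of the idea as I read v2, using
the existing Hyper-V primitives from arch/x86/hyperv (hv_do_fast_hypercall16(),
HVCALL_SEND_IPI, hv_cpu_number_to_vp_number()); the exact code is in the
patch itself, and trace_hyperv_send_ipi_one() is the tracepoint the patch
introduces:

	static bool __send_ipi_one(int cpu, int vector)
	{
		int vp = hv_cpu_number_to_vp_number(cpu);

		trace_hyperv_send_ipi_one(cpu, vector);

		/* Hypercall page not set up, or the VP number is unknown */
		if (!hv_hypercall_pg || (vp == VP_INVAL))
			return false;

		/* Hyper-V only accepts "normal" interrupt vectors */
		if ((vector < HV_IPI_LOW_VECTOR) ||
		    (vector > HV_IPI_HIGH_VECTOR))
			return false;

		/*
		 * A VP number >= 64 doesn't fit into the plain 64-bit
		 * mask of HVCALL_SEND_IPI; fall back to the _ex variant
		 * (which is still cheaper than a regular APIC IPI, per
		 * the numbers above).
		 */
		if (vp >= 64)
			return __send_ipi_mask_ex(cpumask_of(cpu), vector);

		/*
		 * Fast hypercall: the target VP mask is passed in
		 * registers, so no cpumask needs to be built at all.
		 */
		return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector,
					       BIT_ULL(vp));
	}

The point of the optimization is visible in the last hunk: for the common
vp < 64 case everything stays in registers and the cpumask machinery is
bypassed entirely.
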
Reviewed-by: Roman Kagan <rkagan@...tuozzo.com>