Message-ID: <DM5PR21MB0137360776CEDD19E6FECC75D7670@DM5PR21MB0137.namprd21.prod.outlook.com>
Date: Sun, 27 Oct 2019 16:49:31 +0000
From: Michael Kelley <mikelley@...rosoft.com>
To: vkuznets <vkuznets@...hat.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Sasha Levin <sashal@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Roman Kagan <rkagan@...tuozzo.com>,
Joe Perches <joe@...ches.com>
Subject: RE: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case
From: Vitaly Kuznetsov <vkuznets@...hat.com> Sent: Sunday, October 27, 2019 8:20 AM
>
> When sending an IPI to a single CPU there is no need to deal with cpumasks.
> With a 2-CPU guest on WS2019 I'm seeing a minor (about 3%, 8043 -> 7761 CPU
> cycles) improvement with the smp_call_function_single() loop benchmark. The
> optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> important for the PV spinlock kick.
>
> I was also wondering if it would make sense to switch to using a regular
> APIC IPI send for the VP number >= 64 case, but no, it is twice as expensive
> (12650 CPU cycles for the __send_ipi_mask_ex() call vs. 26000 for
> orig_apic.send_IPI(cpu, vector)).
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> Changes since v2:
> - Check VP number instead of CPU number against >= 64 [Michael]
> - Check for VP_INVAL
>
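For readers following along: here is a minimal sketch of the single-CPU fast
path the changelog describes, modeled on arch/x86/hyperv/hv_apic.c. The exact
helper names and the hv_do_fast_hypercall16() usage below are assumptions for
illustration, not a copy of the patch itself:

static bool __send_ipi_one(int cpu, int vector)
{
	int vp = hv_cpu_number_to_vp_number(cpu);

	/* Bail out if the hypercall page isn't set up or the VP is invalid */
	if (!hv_hypercall_pg || vp == VP_INVAL)
		return false;

	/* Hyper-V can only deliver IPIs within a limited vector range */
	if (vector < HV_IPI_LOW_VECTOR || vector > HV_IPI_HIGH_VECTOR)
		return false;

	/*
	 * The fast hypercall carries a single 64-bit VP mask, so it can
	 * only target VP numbers 0..63; fall back to the _ex variant
	 * (sparse VP sets) for higher VP numbers.
	 */
	if (vp >= 64)
		return __send_ipi_mask_ex(cpumask_of(cpu), vector);

	/* HVCALL_SEND_IPI: input1 = vector, input2 = 64-bit VP mask */
	return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
}

The point of the optimization is visible here: no cpumask is allocated or
scanned on the common path, and the hypercall arguments fit entirely in
registers.
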
Reviewed-by: Michael Kelley <mikelley@...rosoft.com>