Message-ID: <alpine.DEB.2.21.1911072220590.27903@nanos.tec.linutronix.de>
Date: Thu, 7 Nov 2019 22:21:35 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
cc: Sasha Levin <sashal@...nel.org>, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org, x86@...nel.org,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Roman Kagan <rkagan@...tuozzo.com>,
Michael Kelley <mikelley@...rosoft.com>,
Joe Perches <joe@...ches.com>
Subject: Re: [PATCH v3] x86/hyper-v: micro-optimize send_ipi_one case
On Thu, 7 Nov 2019, Vitaly Kuznetsov wrote:
> Vitaly Kuznetsov <vkuznets@...hat.com> writes:
>
> > When sending an IPI to a single CPU there is no need to deal with cpumasks.
> > With a 2-CPU guest on WS2019 I'm seeing a minor (~3%, 8043 -> 7761 CPU
> > cycles) improvement with the smp_call_function_single() loop benchmark. The
> > optimization, however, is tiny and straightforward. Also, send_ipi_one() is
> > important for the PV spinlock kick.
> >
> > I was also wondering if it would make sense to switch to using the regular
> > APIC IPI send for the CPU > 64 case, but no, it is twice as expensive (12650
> > CPU cycles for the __send_ipi_mask_ex() call, 26000 for
> > orig_apic.send_IPI(cpu, vector)).
> >
> > Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> > ---
> > Changes since v2:
> > - Check VP number instead of CPU number against >= 64 [Michael]
> > - Check for VP_INVAL
>
> Hi Sasha,
>
> do you have plans to pick this up for hyperv-next or should we ask x86
> folks to?
I'm picking up the constant TSC one anyway, so I can just throw that in as
well.
Thanks,
tglx
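
For reference, below is a minimal sketch of the single-CPU IPI fast path
described in the changelog above. It assumes the usual Hyper-V enlightenment
helpers and constants (hv_cpu_number_to_vp_number(), __send_ipi_mask_ex(),
hv_do_fast_hypercall16(), HVCALL_SEND_IPI, VP_INVAL, HV_IPI_LOW_VECTOR,
HV_IPI_HIGH_VECTOR); it is an illustrative reconstruction of the idea, not
the exact patch:

/*
 * Sketch: send an IPI to a single CPU via a fast hypercall, avoiding
 * cpumask handling entirely for the common (VP < 64) case.
 */
static bool __send_ipi_one(int cpu, int vector)
{
	u32 vp = hv_cpu_number_to_vp_number(cpu);

	/* Bail out if the hypercall page is missing or the VP is invalid. */
	if (!hv_hypercall_pg || vp == VP_INVAL)
		return false;

	/* Only vectors in the architecturally allowed IPI range are valid. */
	if (vector < HV_IPI_LOW_VECTOR || vector > HV_IPI_HIGH_VECTOR)
		return false;

	/*
	 * The fast hypercall input carries a single 64-bit VP bitmap, so
	 * VPs >= 64 still have to take the _ex path with a sparse VP set.
	 */
	if (vp >= 64)
		return __send_ipi_mask_ex(cpumask_of(cpu), vector);

	/* A zero hypercall status means success, hence the inversion. */
	return !hv_do_fast_hypercall16(HVCALL_SEND_IPI, vector, BIT_ULL(vp));
}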