Message-ID: <CACGkMEvNaKgF7bOPUahaYMi6n2vijAXwFvAhQ22LecZGSC-_bg@mail.gmail.com>
Date: Fri, 18 Jul 2025 19:15:37 +0800
From: Jason Wang <jasowang@...hat.com>
To: Chao Gao <chao.gao@...el.com>
Cc: Cindy Lu <lulu@...hat.com>, Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, "Kirill A. Shutemov" <kas@...nel.org>, "Xin Li (Intel)" <xin@...or.com>,
Rik van Riel <riel@...riel.com>, "Ahmed S. Darwish" <darwi@...utronix.de>,
"open list:KVM PARAVIRT (KVM/paravirt)" <kvm@...r.kernel.org>,
"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1] kvm: x86: implement PV send_IPI method
On Fri, Jul 18, 2025 at 7:01 PM Chao Gao <chao.gao@...el.com> wrote:
>
> On Fri, Jul 18, 2025 at 03:52:30PM +0800, Jason Wang wrote:
> >On Fri, Jul 18, 2025 at 2:25 PM Cindy Lu <lulu@...hat.com> wrote:
> >>
> >> From: Jason Wang <jasowang@...hat.com>
> >>
> >> We already have PV versions of send_IPI_mask and
> >> send_IPI_mask_allbutself. This patch implements a PV send_IPI method
> >> to reduce the number of vmexits.
>
> It won't reduce the number of VM-exits; in fact, it may increase them on CPUs
> that support IPI virtualization.
Sure, but I wonder if it reduces vmexits when there's no APICv or
L2 VM. I thought it could reduce the two vmexits to one?
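
To illustrate the 2->1 claim: with xAPIC (MMIO APIC) and no APICv, a
unicast IPI traps twice, once for the ICR2 (destination) write and once
for the ICR (vector) write, while a hypercall exits only once. A minimal
sketch, assuming the patch builds on the existing KVM_HC_SEND_IPI
hypercall; the unicast wrapper below is hypothetical, not the actual
patch:

/*
 * Hypothetical unicast wrapper over the existing KVM_HC_SEND_IPI
 * hypercall (a0/a1 = APIC ID bitmap, a2 = lowest APIC ID covered
 * by the bitmap, a3 = ICR value). This replaces the two trapping
 * xAPIC MMIO writes (APIC_ICR2, then APIC_ICR) with one vmexit.
 */
static void kvm_send_ipi(int cpu, int vector)
{
	u32 apicid = per_cpu(x86_cpu_to_apicid, cpu);

	/* Bit 0 of the bitmap corresponds to the "min" APIC ID (a2). */
	kvm_hypercall4(KVM_HC_SEND_IPI, 1, 0, apicid,
		       APIC_DM_FIXED | vector);
}

With x2APIC the native path is a single WRMSR to the ICR anyway, so the
win (if any) should be on xAPIC guests without APICv.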
>
> With IPI virtualization enabled, *unicast* and physical-addressing IPIs won't
> cause a VM-exit.
Right.
> Instead, the microcode posts interrupts directly to the target
> vCPU. The PV version always causes a VM-exit.
Yes, but I think that applies to all PV IPIs.
>
> >>
> >> Signed-off-by: Jason Wang <jasowang@...hat.com>
> >> Tested-by: Cindy Lu <lulu@...hat.com>
> >
> >I think the question here is: can we see a performance improvement
> >in any kind of setup?
>
> It may result in a negative performance impact.
Userspace can check and enable PV IPI only in the setups where it
helps. For example, Hyper-V does something like this:
void __init hv_apic_init(void)
{
	if (ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) {
		pr_info("Hyper-V: Using IPI hypercalls\n");
		/*
		 * Set the IPI entry points.
		 */
		orig_apic = *apic;

		apic_update_callback(send_IPI, hv_send_ipi);
		apic_update_callback(send_IPI_mask, hv_send_ipi_mask);
		apic_update_callback(send_IPI_mask_allbutself,
				     hv_send_ipi_mask_allbutself);
		apic_update_callback(send_IPI_allbutself,
				     hv_send_ipi_allbutself);
		apic_update_callback(send_IPI_all, hv_send_ipi_all);
		apic_update_callback(send_IPI_self, hv_send_ipi_self);
	}
Note that send_IPI_mask is set there as well.
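
On the KVM side the same kind of gating already exists via CPUID: the
guest only installs the PV callbacks when the host advertises
KVM_FEATURE_PV_SEND_IPI, so userspace can simply hide that bit when IPI
virtualization makes the native path cheaper. A sketch, assuming the
patch extends the existing kvm_setup_pv_ipi() in arch/x86/kernel/kvm.c
(the send_IPI line is the hypothetical addition):

static int __init kvm_setup_pv_ipi(void)
{
	if (kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI)) {
		apic_update_callback(send_IPI_mask, kvm_send_ipi_mask);
		apic_update_callback(send_IPI_mask_allbutself,
				     kvm_send_ipi_mask_allbutself);
		/* Hypothetical addition from this patch: */
		apic_update_callback(send_IPI, kvm_send_ipi);
	}
	return 0;
}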
Thanks