Message-ID: <aHpTuFweA5YFskuC@google.com>
Date: Fri, 18 Jul 2025 07:01:28 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Chao Gao <chao.gao@...el.com>
Cc: Jason Wang <jasowang@...hat.com>, Cindy Lu <lulu@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>, Vitaly Kuznetsov <vkuznets@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, "Kirill A. Shutemov" <kas@...nel.org>, "Xin Li (Intel)" <xin@...or.com>,
Rik van Riel <riel@...riel.com>, "Ahmed S. Darwish" <darwi@...utronix.de>,
"open list:KVM PARAVIRT (KVM/paravirt)" <kvm@...r.kernel.org>,
"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1] kvm: x86: implement PV send_IPI method
On Fri, Jul 18, 2025, Chao Gao wrote:
> On Fri, Jul 18, 2025 at 07:15:37PM +0800, Jason Wang wrote:
> >On Fri, Jul 18, 2025 at 7:01 PM Chao Gao <chao.gao@...el.com> wrote:
> >>
> >> On Fri, Jul 18, 2025 at 03:52:30PM +0800, Jason Wang wrote:
> >> >On Fri, Jul 18, 2025 at 2:25 PM Cindy Lu <lulu@...hat.com> wrote:
> >> >>
> >> >> From: Jason Wang <jasowang@...hat.com>
> >> >>
> >> >> We already have PV versions of send_IPI_mask and
> >> >> send_IPI_mask_allbutself. This patch implements a PV send_IPI method
> >> >> to reduce the number of vmexits.
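For onlookers, presumably this is just a thin wrapper around the existing
mask-based path, something like the below (reconstructed from the changelog,
not the actual diff):

	/*
	 * Rough sketch, not the actual patch: route the single-target
	 * send_IPI callback through the existing PV mask-based helper
	 * in arch/x86/kernel/kvm.c.
	 */
	static void kvm_send_ipi(int cpu, int vector)
	{
		__send_ipi_mask(cpumask_of(cpu), vector);
	}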
> >>
> >> It won't reduce the number of VM-exits; in fact, it may increase them on CPUs
> >> that support IPI virtualization.
> >
> >Sure, but I wonder if it reduces the vmexits when there's no APICv or
> >an L2 VM. I thought it could reduce the two vmexits to one?
>
> Even without APICv, there is just 1 vmexit due to APIC write (xAPIC mode)
> or MSR write (x2APIC mode).
xAPIC will have two exits: ICR2 and then ICR. If xAPIC vs. x2APIC is stable when
kvm_setup_pv_ipi() runs, maybe key off of that?
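Something like the below, assuming the APIC mode can't change after
kvm_setup_pv_ipi() runs (untested sketch; kvm_send_ipi is the hypothetical
single-target wrapper from above):

	static void kvm_setup_pv_ipi(void)
	{
		/*
		 * xAPIC's ICR2+ICR dance costs two exits per IPI, so a
		 * single hypercall is a clear win; an x2APIC ICR write
		 * is already just one exit.
		 */
		if (!x2apic_enabled())
			apic_update_callback(send_IPI, kvm_send_ipi);

		apic_update_callback(send_IPI_mask, kvm_send_ipi_mask);
		pr_info("setup PV IPIs\n");
	}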
> >> With IPI virtualization enabled, *unicast* and physical-addressing IPIs won't
> >> cause a VM-exit.
> >
> >Right.
> >
> >> Instead, the microcode posts interrupts directly to the target
> >> vCPU. The PV version always causes a VM-exit.
> >
> >Yes, but I think that applies to all PV IPIs.
>
> For multi-cast IPIs, a single hypercall (PV IPI) outperforms multiple ICR
> writes, even when IPI virtualization is enabled.
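For reference, the mask-based hypercall batches up to 128 targets, expressed
as a bitmap relative to the lowest destination APIC ID, into a single exit;
sketched from the KVM_HC_SEND_IPI ABI in Documentation/virt/kvm/hypercalls.rst:

	/* One exit for the whole mask vs. one (or two) exits per target. */
	static long kvm_pv_send_ipi(unsigned long ipi_bitmap_low,
				    unsigned long ipi_bitmap_high,
				    u32 min_apic_id, int vector)
	{
		return kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap_low,
				      ipi_bitmap_high, min_apic_id,
				      APIC_DM_FIXED | vector);
	}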
FWIW, I doubt _all_ multi-cast IPIs outperform IPI virtualization. My guess is
there's a threshold in the number of targets where the cost of sending multiple
virtual IPIs becomes more expensive than the VM-Exit and software processing,
and I assume/hope that threshold isn't '2'.
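Purely illustrative, with made-up names (ipi_virt_enabled, pv_ipi_threshold,
and native_send_ipi_mask standing in for the saved native callback):

	static bool ipi_virt_enabled;			/* made-up detection */
	static unsigned int pv_ipi_threshold = 8;	/* made-up tunable */
	static void (*native_send_ipi_mask)(const struct cpumask *, int);

	static void kvm_send_ipi_mask(const struct cpumask *mask, int vector)
	{
		/*
		 * Below some number of targets, letting the CPU post
		 * virtual IPIs may beat eating a VM-Exit for the
		 * hypercall; above it, batching into one exit wins.
		 */
		if (ipi_virt_enabled && cpumask_weight(mask) < pv_ipi_threshold)
			native_send_ipi_mask(mask, vector);
		else
			__send_ipi_mask(mask, vector);
	}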
> >> >> Signed-off-by: Jason Wang <jasowang@...hat.com>
> >> >> Tested-by: Cindy Lu <lulu@...hat.com>
> >> >
> >> >I think a question here is: are we able to see a performance
> >> >improvement in any kind of setup?
> >>
> >> It may result in a negative performance impact.
> >
> >Userspace can check and enable PV IPI for the cases where it helps.
>
> Yeah, we need to identify the cases. One example may be TDX guests, where
> using a PV approach (TDVMCALL) can avoid the #VE cost.
TDX doesn't need a PV approach. Or rather, TDX already has an "architectural"
PV approach. Make a TDVMCALL to request emulation of WRMSR(ICR). Don't plumb
more KVM logic into it.
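In guest terms, roughly (constants per the GHCI spec; wrapper names assume
arch/x86/coco/tdx, and tdx_send_ipi itself is hypothetical):

	static void tdx_send_ipi(u32 dest_apic_id, int vector)
	{
		/*
		 * Ask the VMM to emulate the x2APIC ICR write directly,
		 * instead of faulting with a #VE and making the same
		 * TDVMCALL from the #VE handler.
		 */
		struct tdx_module_args args = {
			.r10 = TDX_HYPERCALL_STANDARD,
			.r11 = hcall_func(EXIT_REASON_MSR_WRITE),
			.r12 = APIC_BASE_MSR + (APIC_ICR >> 4),
			.r13 = ((u64)dest_apic_id << 32) | APIC_DM_FIXED | vector,
		};

		__tdx_hypercall(&args);
	}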