Message-ID: <CANRm+Cwv5jqxBW=Ss5nkX7kZM3_Y-Ucs66yx5+wN09=W4pUdzA@mail.gmail.com>
Date: Tue, 11 Jun 2019 09:45:33 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
<sean.j.christopherson@...el.com> wrote:
>
> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> > 2019-05-30 09:05+0800, Wanpeng Li:
> > > The idea is from Xen: when sending a call-function IPI-many to vCPUs,
> > > yield if any of the IPI target vCPUs was preempted. A 17% performance
> > > increase in the ebizzy benchmark can be observed in an over-subscribed
> > > environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> > > IPI-many since call-function is not easy to trigger from a userspace
> > > workload.)
> >
> > Have you checked if we could gain performance by having the yield as an
> > extension to our PV IPI call?
> >
> > It would allow us to skip the VM entry/exit overhead on the caller.
> > (The benefit of that might be negligible and it also poses a
> > complication when splitting the target mask into several PV IPI
> > hypercalls.)
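
(For context on the splitting: each PV IPI hypercall carries a two-word,
128-bit on x86_64, bitmap of APIC IDs anchored at a base APIC ID, so a
destination set spanning more than 128 APIC IDs already takes several
hypercalls. A minimal sketch of one chunk follows; the helper name is made
up, see __send_ipi_mask() in arch/x86/kernel/kvm.c for the real code.)

#include <linux/types.h>
#include <linux/kvm_para.h>     /* kvm_hypercall4(), KVM_HC_SEND_IPI */
#include <asm/apicdef.h>        /* APIC_DM_FIXED */

/*
 * One hypercall covers APIC IDs [min_apic_id, min_apic_id + 127] via the
 * two bitmap words; targets outside that window need another call.
 */
static void pv_send_ipi_chunk(unsigned long bitmap_low,
			      unsigned long bitmap_high,
			      u32 min_apic_id, int vector)
{
	unsigned long icr = APIC_DM_FIXED | vector;

	kvm_hypercall4(KVM_HC_SEND_IPI, bitmap_low, bitmap_high,
		       min_apic_id, icr);
}
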
>
> Tangentially related to splitting PV IPI hypercalls, are there any major
> hurdles to supporting shorthand? Not having to generate the mask for
> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy way to
> shave cycles for affected flows.

I'm not sure why shorthand isn't used even for native x2APIC mode.
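
For comparison, with a destination shorthand an all-but-self IPI in x2APIC
mode is a single ICR write with no mask generation at all. Illustrative
sketch only (the helper name is made up and the current apic drivers do not
do this):

#include <linux/types.h>
#include <asm/apicdef.h>        /* APIC_BASE_MSR, APIC_ICR, APIC_DEST_ALLBUT, APIC_DM_FIXED */
#include <asm/msr.h>            /* wrmsrl() */

/* "All excluding self" shorthand IPI in x2APIC mode, for illustration. */
static void x2apic_send_ipi_allbutself_shorthand(int vector)
{
	u64 icr = APIC_DEST_ALLBUT | APIC_DM_FIXED | vector;

	/* In x2APIC mode the ICR is a single 64-bit MSR write. */
	wrmsrl(APIC_BASE_MSR + (APIC_ICR >> 4), icr);
}
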
Regards,
Wanpeng Li