Message-ID: <CANRm+Cz6vX587uLV__FheXuiOe7pzfGeUZb++ZJ1y9Cmk6GkoA@mail.gmail.com>
Date: Fri, 28 Jun 2019 15:29:58 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH v4 0/3] KVM: Yield to IPI target if necessary
ping again,
On Tue, 18 Jun 2019 at 17:00, Wanpeng Li <kernellwp@...il.com> wrote:
>
> ping, :)
> On Tue, 11 Jun 2019 at 20:23, Wanpeng Li <kernellwp@...il.com> wrote:
> >
> > The idea is from Xen: when sending a call-function IPI-many to vCPUs,
> > yield if any of the IPI target vCPUs was preempted. A 17% performance
> > improvement on the ebizzy benchmark can be observed in an over-subscribed
> > environment. (w/ kvm-pv-tlb disabled, testing the TLB flush call-function
> > IPI-many path, since call-function is not easy to trigger from a userspace
> > workload).
> >
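> > Roughly, the guest side of the idea looks like the sketch below. It is
> > illustrative only; the exact names and the KVM_HC_SCHED_YIELD hypercall
> > number are defined by the patches in this series:
> >
> >     /* Guest side (arch/x86/kernel/kvm.c context), sketch only. */
> >     static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
> >     {
> >             int cpu;
> >
> >             /* Send the call-function IPI to all targets as usual. */
> >             native_send_call_func_ipi(mask);
> >
> >             /*
> >              * If any target vCPU was preempted by the host, ask the
> >              * host to yield to it so the sender does not burn cycles
> >              * waiting for a descheduled vCPU to handle the IPI.
> >              */
> >             for_each_cpu(cpu, mask) {
> >                     if (vcpu_is_preempted(cpu)) {
> >                             kvm_hypercall1(KVM_HC_SCHED_YIELD,
> >                                            per_cpu(x86_cpu_to_apicid, cpu));
> >                             break;
> >                     }
> >             }
> >     }
> >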
> > v3 -> v4:
> > * check map->phys_map[dest_id]
> > * cleaner kvm_sched_yield() (see the host-side sketch below)
> >
> > v2 -> v3:
> > * add bounds-check on dest_id
> >
> > v1 -> v2:
> > * check map is not NULL
> > * check map->phys_map[dest_id] is not NULL
> > * make kvm_sched_yield static
> > * change dest_id to unsigned long
> >
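> > The checks listed in the changelog above belong to the host-side
> > hypercall handler. A sketch of how they fit together (again illustrative,
> > not the final code; the real handler is added by patch 2):
> >
> >     /* Host side (arch/x86/kvm/x86.c context), sketch only. */
> >     static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> >     {
> >             struct kvm_vcpu *target = NULL;
> >             struct kvm_apic_map *map;
> >
> >             rcu_read_lock();
> >             map = rcu_dereference(kvm->arch.apic_map);
> >
> >             /*
> >              * map may be NULL, dest_id must be bounds-checked, and the
> >              * phys_map slot for dest_id may be empty.
> >              */
> >             if (likely(map) && dest_id <= map->max_apic_id &&
> >                 map->phys_map[dest_id])
> >                     target = map->phys_map[dest_id]->vcpu;
> >
> >             rcu_read_unlock();
> >
> >             if (target)
> >                     kvm_vcpu_yield_to(target);
> >     }
> >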
> > Wanpeng Li (3):
> > KVM: X86: Yield to IPI target if necessary
> > KVM: X86: Implement PV sched yield hypercall
> > KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest
> >
> > Documentation/virtual/kvm/cpuid.txt | 4 ++++
> > Documentation/virtual/kvm/hypercalls.txt | 11 +++++++++++
> > arch/x86/include/uapi/asm/kvm_para.h | 1 +
> > arch/x86/kernel/kvm.c | 21 +++++++++++++++++++++
> > arch/x86/kvm/cpuid.c | 3 ++-
> > arch/x86/kvm/x86.c | 21 +++++++++++++++++++++
> > include/uapi/linux/kvm_para.h | 1 +
> > 7 files changed, 61 insertions(+), 1 deletion(-)
> >
> > --
> > 2.7.4
> >