Message-ID: <CANRm+Cwic4kV6sWUvDMVb7PRn0kjCW7xLMyN-G7Px+ciDZb9qQ@mail.gmail.com>
Date: Fri, 20 Jul 2018 18:17:40 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Radim Krcmar <rkrcmar@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH v3 2/6] KVM: X86: Implement PV IPIs in linux guest
On Fri, 20 Jul 2018 at 17:51, Radim Krcmar <rkrcmar@...hat.com> wrote:
>
> 2018-07-20 11:33+0800, Wanpeng Li:
> > On Fri, 20 Jul 2018 at 00:28, Radim Krčmář <rkrcmar@...hat.com> wrote:
> > > 2018-07-03 14:21+0800, Wanpeng Li:
> > > But because it is very similar to x2apic, I'd really need some real
> > > performance data to see if this benefits a real workload.
> >
> > Thanks for your review, Radim! :) I will find another real benchmark
> > instead of the micro one to evaluate the performance.
>
> Analyzing the cpu bitmap for every IPI request on a non-small guest (at
> least 32 VCPUs, ideally >256) during various workloads could also
> provide some insight regardless of workload/benchmark result -- we want
> to know how many VM exits we would save.
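A minimal userspace sketch of that accounting, assuming one x2APIC ICR
write (and hence one VM exit) per destination CPU versus one hypercall
per 64-bit bitmap word with any bit set; both the per-word chunking and
the one-exit-per-ICR-write assumption are simplifications for
illustration, not taken from the patchset:

#include <stdio.h>
#include <stdint.h>

/*
 * Rough estimate of VM exits for one multicast IPI:
 * - x2APIC physical mode: one ICR MSR write (one exit) per target CPU
 * - PV IPI: one hypercall per 64-bit bitmap word with any bit set
 */
#define NR_WORDS 8	/* enough for a 512-vCPU guest */

static unsigned int exits_x2apic(const uint64_t *mask)
{
	unsigned int n = 0;

	for (int i = 0; i < NR_WORDS; i++)
		n += __builtin_popcountll(mask[i]);
	return n;
}

static unsigned int exits_pv_ipi(const uint64_t *mask)
{
	unsigned int n = 0;

	for (int i = 0; i < NR_WORDS; i++)
		n += mask[i] != 0;
	return n;
}

int main(void)
{
	uint64_t mask[NR_WORDS] = { 0 };

	/* Example: IPI to vCPUs 0-63 and vCPU 100 */
	mask[0] = ~0ULL;
	mask[1] = 1ULL << (100 - 64);

	printf("x2APIC exits: %u\n", exits_x2apic(mask));	/* 65 */
	printf("PV IPI exits: %u\n", exits_pv_ipi(mask));	/* 2  */
	return 0;
}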
I will try the ebizzy benchmark once I complete the patchset w/ the
__uint128 approach which Paolo just suggested. In addition, I remember
Aliyun posted performance numbers for their real online workload, a
"message oriented middleware": vmexits dropped from 800 k/s w/o PV IPIs
to 150 k/s w/ PV IPIs.
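
For context on the __uint128 variant, a minimal sketch of the
guest-side bitmap packing, assuming GCC's unsigned __int128 and a
hypothetical send_ipi_hypercall() stand-in (real guest code would pass
the low and high 64-bit halves of the bitmap to KVM in registers):

#include <stdio.h>
#include <stdint.h>

typedef unsigned __int128 u128;

/* Hypothetical stand-in for the actual hypercall. */
static void send_ipi_hypercall(uint64_t bitmap_low, uint64_t bitmap_high,
			       int vector)
{
	printf("hypercall: low=%#llx high=%#llx vector=%d\n",
	       (unsigned long long)bitmap_low,
	       (unsigned long long)bitmap_high, vector);
}

static void send_ipi_mask(const int *apic_ids, int n, int vector)
{
	u128 bitmap = 0;

	for (int i = 0; i < n; i++) {
		if (apic_ids[i] >= 128)
			continue;	/* out of range: would need a fallback */
		bitmap |= (u128)1 << apic_ids[i];
	}

	/* One hypercall covers all in-range targets at once. */
	send_ipi_hypercall((uint64_t)bitmap, (uint64_t)(bitmap >> 64),
			   vector);
}

int main(void)
{
	int targets[] = { 0, 1, 65, 127 };

	send_ipi_mask(targets, 4, 0xfd);	/* arbitrary example vector */
	return 0;
}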
Regards,
Wanpeng Li