Message-ID: <20180720095135.GA8330@flask>
Date: Fri, 20 Jul 2018 11:51:36 +0200
From: Radim Krcmar <rkrcmar@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH v3 2/6] KVM: X86: Implement PV IPIs in linux guest

2018-07-20 11:33+0800, Wanpeng Li:
> On Fri, 20 Jul 2018 at 00:28, Radim Krčmář <rkrcmar@...hat.com> wrote:
> > 2018-07-03 14:21+0800, Wanpeng Li:
> > But because it is very similar to x2apic, I'd really need some real
> > performance data to see if this benefits a real workload.
>
> Thanks for your review, Radim! :) I will find another real benchmark
> instead of the micro one to evaluate the performance.
Analyzing the cpu bitmap of every IPI request on a non-small guest (at
least 32 VCPUs, ideally >256) during various workloads could also
provide some insight regardless of the benchmark results -- we want to
know how many VM exits we would save.
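
A rough sketch of how the guest could account for this is below; it
assumes x2APIC physical mode costs one ICR write (one exit without
APICv) per destination, and that one PV hypercall covers a 64-APIC-ID
window (adjust to whatever window the final hypercall ABI uses).  The
names here (ipi_stats_account, the counters) are made up for
illustration:

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <linux/limits.h>
#include <asm/smp.h>			/* x86_cpu_to_apicid */

static atomic64_t x2apic_icr_writes;	/* exits under plain x2APIC */
static atomic64_t pv_ipi_hypercalls;	/* exits under PV send-IPI */

static void ipi_stats_account(const struct cpumask *mask)
{
	unsigned int cpu, id, min_id = UINT_MAX, max_id = 0;

	for_each_cpu(cpu, mask) {
		id = per_cpu(x86_cpu_to_apicid, cpu);
		min_id = min(min_id, id);
		max_id = max(max_id, id);
		/* x2apic physical mode: one ICR write per target */
		atomic64_inc(&x2apic_icr_writes);
	}
	if (min_id != UINT_MAX)
		/* one hypercall per 64-APIC-ID window the mask spans */
		atomic64_add((max_id - min_id) / 64 + 1,
			     &pv_ipi_hypercalls);
}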
> > > +static void kvm_send_ipi_all(int vector)
> > > +{
> > > + __send_ipi_mask(cpu_online_mask, vector);
> > > +}
> >
> > These should be faster when using the native APIC shorthand -- is this
> > the "Broadcast" in your tests?
>
> Not true; .send_IPI_all has almost no callers even though the Linux
> apic drivers implement this hook. In addition, the shorthand is not
> used in x2apic mode (__x2apic_send_IPI_dest()), and it sees very
> limited use in other scenarios according to the Linux apic drivers.
Good point,
(xAPIC uses shorthands, so I didn't expect we'd stop doing so with
x2APIC, but there was probably no need.)
thanks.
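
As an aside, the shorthand variant discussed above would be a single
ICR write on x2APIC too.  A minimal sketch (illustration only -- not
what the kernel currently does), using the destination-shorthand bits
of the ICR:

#include <asm/apic.h>		/* native_x2apic_icr_write() */
#include <asm/apicdef.h>	/* APIC_DEST_ALLINC, APIC_DM_FIXED */

/* Broadcast a fixed-mode IPI to all CPUs, including self. */
static void x2apic_send_IPI_all_shorthand(int vector)
{
	/* ICR bits 19:18 = 10b select "all including self" */
	native_x2apic_icr_write(APIC_DEST_ALLINC | APIC_DM_FIXED | vector, 0);
}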