Date: Mon, 11 Nov 2019 22:59:14 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wanpeng Li <kernellwp@...il.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Radim Krčmář <rkrcmar@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH 1/2] KVM: X86: Single target IPI fastpath
On 09/11/19 08:05, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@...cent.com>
>
> This patch optimizes single-target IPIs in x2APIC physical destination mode
> with fixed delivery mode: the IPI is delivered to the receiver immediately
> after the sender's ICR-write vmexit, skipping various checks when possible.
>
> Testing on Xeon Skylake server:
>
> The virtual IPI latency, from the sender's send to the receiver's receive,
> drops by more than 330 CPU cycles.
>
> Running hackbench (reschedule IPIs) in the guest, the average handling time
> of MSR_WRITE-induced vmexits drops by more than 1000 CPU cycles:
>
> Before patch:
>
> VM-EXIT     Samples   Samples%  Time%   Min Time  Max Time   Avg time
> MSR_WRITE   5417390   90.01%    16.31%  0.69us    159.60us   1.08us
>
> After patch:
>
> VM-EXIT     Samples   Samples%  Time%   Min Time  Max Time   Avg time
> MSR_WRITE   6726109   90.73%    62.18%  0.48us    191.27us   0.58us
Do you have retpolines enabled? The bulk of the speedup might come just
from avoiding the indirect jump.
Paolo