Message-ID: <YQKl7/0I4p0o0TCY@hirez.programming.kicks-ass.net>
Date: Thu, 29 Jul 2021 14:58:23 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Like Xu <like.xu.linux@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>, kvm@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/pmu: Introduce pmc->is_paused to reduce the
call time of perf interfaces
On Wed, Jul 28, 2021 at 08:07:05PM +0800, Like Xu wrote:
> From: Like Xu <likexu@...cent.com>
>
> Based on our observations, after any vm-exit associated with the vPMU, at
> least two perf interfaces are called for guest counter emulation, such as
> perf_event_{pause, read_value, period}(), and each one will {lock, unlock}
> the same perf_event_ctx. The calls become even more frequent when the
> guest uses counters in a multiplexed manner.
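
As a concrete illustration of that call pattern, the helpers involved look
roughly like the sketch below (modeled on arch/x86/kvm/pmu.{c,h}; names and
details are assumptions, not quotes of the tree). Each perf_event_*() call
takes and releases the same perf_event_ctx lock inside the perf core:

static void pmc_pause_counter(struct kvm_pmc *pmc)
{
	if (!pmc->perf_event)
		return;
	/* disables the event, returns its count and resets it to 0 */
	pmc->counter += perf_event_pause(pmc->perf_event, true);
}

static u64 pmc_read_counter(struct kvm_pmc *pmc)
{
	u64 counter, enabled, running;

	counter = pmc->counter;
	if (pmc->perf_event)	/* a second lock/unlock of the same ctx */
		counter += perf_event_read_value(pmc->perf_event,
						 &enabled, &running);
	return counter;
}
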
>
> Taking the lock once and completing the KVM request operations inside the
> perf context would require a set of impractical new interfaces. Instead,
> we can optimize the vPMU implementation by avoiding repeated calls to
> these interfaces in the KVM context, for at least one pattern:
>
> After we call perf_event_pause() once, the event is disabled and its
> internal count is reset to 0, so there is no need to pause it again or
> read its value. Once the event is paused, its period will not be updated
> until the next time it is resumed or reprogrammed. So there is also no
> need to call perf_event_period() twice for a non-running counter,
> considering that the perf_event for a running counter is never paused.
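
In code terms, the pattern described above presumably boils down to the
pmc->is_paused flag from the subject line short-circuiting the redundant
calls; a sketch against the helpers shown earlier (the actual patch may
differ):

static void pmc_pause_counter(struct kvm_pmc *pmc)
{
	if (!pmc->perf_event || pmc->is_paused)
		return;	/* already paused: the count was reset to 0 */
	pmc->counter += perf_event_pause(pmc->perf_event, true);
	pmc->is_paused = true;
}

static u64 pmc_read_counter(struct kvm_pmc *pmc)
{
	u64 counter, enabled, running;

	counter = pmc->counter;
	/* a paused event reads as 0; skip the locked read */
	if (pmc->perf_event && !pmc->is_paused)
		counter += perf_event_read_value(pmc->perf_event,
						 &enabled, &running);
	return counter;
}

static bool pmc_resume_counter(struct kvm_pmc *pmc)
{
	if (!pmc->perf_event)
		return false;
	/* only a paused event needs its sample period recalibrated */
	if (pmc->is_paused &&
	    perf_event_period(pmc->perf_event,
			      get_sample_period(pmc, pmc->counter)))
		return false;
	perf_event_enable(pmc->perf_event);
	pmc->is_paused = false;
	return true;
}
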
>
> With this optimization in place, for the following common usage of
> sampling 4 events using perf on a 4u8g (4 vCPU, 8GB memory) guest:
>
> echo 0 > /proc/sys/kernel/watchdog
> echo 25 > /proc/sys/kernel/perf_cpu_time_max_percent
> echo 10000 > /proc/sys/kernel/perf_event_max_sample_rate
> echo 0 > /proc/sys/kernel/perf_cpu_time_max_percent
> for i in `seq 1 1 10`
> do
> taskset -c 0 perf record \
> -e cpu-cycles -e instructions -e branch-instructions -e cache-misses \
> /root/br_instr a
> done
>
> the average latency of the guest NMI handler is reduced from
> 37646.7 ns to 32929.3 ns (~1.14x speedup) on an Intel Ice Lake (ICX)
> server. In addition to collecting more samples, no loss of sampling
> accuracy was observed compared to before the optimization.
>
> Signed-off-by: Like Xu <likexu@...cent.com>
Looks sane I suppose.
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
What kinds of VM-exits are the most common?