Message-ID: <aCaQedEhZwj9BsVK@google.com>
Date: Thu, 15 May 2025 18:10:17 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Mingwei Zhang <mizhang@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>, Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
Kan Liang <kan.liang@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org,
Yongwei Ma <yongwei.ma@...el.com>, Xiong Zhang <xiong.y.zhang@...ux.intel.com>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>, Jim Mattson <jmattson@...gle.com>,
Sandipan Das <sandipan.das@....com>, Zide Chen <zide.chen@...el.com>,
Eranian Stephane <eranian@...gle.com>, Shukla Manali <Manali.Shukla@....com>,
Nikunj Dadhania <nikunj.dadhania@....com>
Subject: Re: [PATCH v4 30/38] KVM: x86/pmu: Handle emulated instruction for
mediated vPMU
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> static void kvm_pmu_incr_counter(struct kvm_pmc *pmc)
> {
> - pmc->emulated_counter++;
> - kvm_pmu_request_counter_reprogram(pmc);
> + struct kvm_vcpu *vcpu = pmc->vcpu;
> +
> + /*
> + * For perf-based PMUs, accumulate software-emulated events separately
> + * from pmc->counter, as pmc->counter is offset by the count of the
> + * associated perf event. Request reprogramming, which will consult
> + * both emulated and hardware-generated events to detect overflow.
> + */
> + if (!kvm_mediated_pmu_enabled(vcpu)) {
> + pmc->emulated_counter++;
> + kvm_pmu_request_counter_reprogram(pmc);
> + return;
> + }
> +
> + /*
> + * For mediated PMUs, pmc->counter is updated when the vCPU's PMU is
> + * put, and will be loaded into hardware when the PMU is loaded. Simply
> + * increment the counter and signal overflow if it wraps to zero.
> + */
> + pmc->counter = (pmc->counter + 1) & pmc_bitmask(pmc);
> + if (!pmc->counter) {
Ugh, this is broken for the fastpath. If kvm_skip_emulated_instruction() is
invoked by handle_fastpath_set_msr_irqoff() or handle_fastpath_hlt(), KVM may
consume stale information (GLOBAL_CTRL, GLOBAL_STATUS and PMCs), and even if KVM
gets lucky and those are all fresh, the PMC and GLOBAL_STATUS changes won't be
propagated back to hardware.
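
To spell out the path I'm worried about, roughly (with the mediated PMU state
still live in hardware at that point):

	handle_fastpath_set_msr_irqoff() / handle_fastpath_hlt()
	  kvm_skip_emulated_instruction()
	    kvm_pmu_trigger_event()
	      kvm_pmu_incr_counter()
	        pmc->counter++	<= updates KVM's snapshot, not the live PMC;
				   the GLOBAL_STATUS/PMI changes never make
				   it back into hardware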
The best idea I have is to track whether or not the guest may be counting branches
and/or instructions based on eventsels, and then bail from fastpaths that need to
skip instructions. That flag would also be useful to further optimize
kvm_pmu_trigger_event().
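
Something like the below is what I have in mind (completely untested sketch;
may_count_emulated_events, pmc_counts_event() and the update helper are
made-up names for illustration, not existing KVM symbols):

	/*
	 * Recompute the hint whenever eventsels are (re)programmed: true if
	 * any counter may be counting retired instructions or branches, i.e.
	 * the events KVM emulates in software when skipping an instruction.
	 */
	static void kvm_pmu_update_emulated_event_hint(struct kvm_pmu *pmu)
	{
		struct kvm_pmc *pmc;
		int i;

		pmu->may_count_emulated_events = false;

		kvm_for_each_pmc(pmu, pmc, i, pmu->all_valid_pmc_idx) {
			/*
			 * pmc_counts_event() is hypothetical: the same
			 * eventsel matching kvm_pmu_trigger_event() does.
			 */
			if (pmc_counts_event(pmc, kvm_pmu_eventsel.INSTRUCTIONS_RETIRED) ||
			    pmc_counts_event(pmc, kvm_pmu_eventsel.BRANCH_INSTRUCTIONS_RETIRED)) {
				pmu->may_count_emulated_events = true;
				break;
			}
		}
	}

	/* And in the fastpaths, before skipping the instruction: */
	if (kvm_mediated_pmu_enabled(vcpu) &&
	    vcpu_to_pmu(vcpu)->may_count_emulated_events)
		return EXIT_FASTPATH_NONE;
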
> + pmc_to_pmu(pmc)->global_status |= BIT_ULL(pmc->idx);
> + if (pmc_pmi_enabled(pmc))
> + kvm_make_request(KVM_REQ_PMI, vcpu);
> + }
> }
>
> static inline bool cpl_is_matched(struct kvm_pmc *pmc)
> --
> 2.49.0.395.g12beb8f557-goog
>