Message-ID: <ZiGWmCgu8fGZHULu@google.com>
Date: Thu, 18 Apr 2024 21:54:32 +0000
From: Mingwei Zhang <mizhang@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Xiong Zhang <xiong.y.zhang@...ux.intel.com>, pbonzini@...hat.com,
peterz@...radead.org, kan.liang@...el.com, zhenyuw@...ux.intel.com,
dapeng1.mi@...ux.intel.com, jmattson@...gle.com,
kvm@...r.kernel.org, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org, zhiyuan.lv@...el.com,
eranian@...gle.com, irogers@...gle.com, samantha.alt@...el.com,
like.xu.linux@...il.com, chao.gao@...el.com
Subject: Re: [RFC PATCH 40/41] KVM: x86/pmu: Separate passthrough PMU logic
in set/get_msr() from non-passthrough vPMU
On Thu, Apr 11, 2024, Sean Christopherson wrote:
> On Fri, Jan 26, 2024, Xiong Zhang wrote:
> > From: Mingwei Zhang <mizhang@...gle.com>
> >
> > Separate passthrough PMU logic from non-passthrough vPMU code. There are
> > two places where passthrough vPMU's set/get_msr() may call into the
> > existing non-passthrough vPMU code: 1) setting/getting counters;
> > 2) setting the global_ctrl MSR.
> >
> > In the former case, the non-passthrough vPMU calls into
> > pmc_{read,write}_counter(), which wire into the perf API. Update these
> > functions to avoid invoking the perf API.
> >
> > In the latter case, a global_ctrl MSR write invokes reprogram_counters(),
> > which invokes the non-passthrough PMU logic. So use the pmu->passthrough
> > flag to gate the call.
> >
> > Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
> > ---
> > arch/x86/kvm/pmu.c | 4 +++-
> > arch/x86/kvm/pmu.h | 10 +++++++++-
> > 2 files changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index 9e62e96fe48a..de653a67ba93 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -652,7 +652,9 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> > if (pmu->global_ctrl != data) {
> > diff = pmu->global_ctrl ^ data;
> > pmu->global_ctrl = data;
> > - reprogram_counters(pmu, diff);
> > + /* Passthrough vPMU never reprograms counters. */
> > + if (!pmu->passthrough)
>
> This should probably be handled in reprogram_counters(), otherwise we'll be
> playing whack-a-mole, e.g. this misses MSR_IA32_PEBS_ENABLE, which is benign,
> but only because PEBS isn't yet supported.
>
> > + reprogram_counters(pmu, diff);
> > }
> > break;
> > case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> > diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> > index 0fc37a06fe48..ab8d4a8e58a8 100644
> > --- a/arch/x86/kvm/pmu.h
> > +++ b/arch/x86/kvm/pmu.h
> > @@ -70,6 +70,9 @@ static inline u64 pmc_read_counter(struct kvm_pmc *pmc)
> > u64 counter, enabled, running;
> >
> > counter = pmc->counter;
> > + if (pmc_to_pmu(pmc)->passthrough)
> > + return counter & pmc_bitmask(pmc);
>
> Won't perf_event always be NULL for mediated counters? I.e. this can be dropped,
> I think.
Yeah. I double checked, and it seems the logic is correct when
perf_event == NULL, so we can drop that.
Thanks.
-Mingwei