Message-ID: <ZDA4nsyAku9B2/58@google.com>
Date: Fri, 7 Apr 2023 08:37:02 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Like Xu <like.xu.linux@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Ravi Bangoria <ravi.bangoria@....com>
Subject: Re: [PATCH V2] KVM: x86/pmu: Disable vPMU if EVENTSEL_GUESTONLY bit
doesn't exist
On Fri, Apr 07, 2023, Like Xu wrote:
> From: Like Xu <likexu@...cent.com>
>
> Unlike Intel's MSR atomic_switch mechanism, AMD supports the guest PMU's
> basic counter feature via the host setting the GUESTONLY bit, so the
> presence or absence of this bit determines whether vPMU is emulatable
> (e.g. in nested virtualization). Since writing reserved bits of an
> EVENTSEL register does not #GP on AMD, KVM needs to update the global
> enable_pmu value by checking whether the GUESTONLY bit sticks when written.
This is looking more and more like a bug fix, i.e. needs a Fixes:, no?
> Cc: Ravi Bangoria <ravi.bangoria@....com>
> Signed-off-by: Like Xu <likexu@...cent.com>
> ---
> V1:
> https://lore.kernel.org/kvm/20230307113819.34089-1-likexu@tencent.com
> V1 -> V2 Changelog:
> - Preemption needs to be disabled to ensure a stable CPU; (Sean)
> - KVM should be restoring the original value too; (Sean)
> - Disable vPMU once guest_only mode is not supported; (Sean)
Please respond to my questions, don't just send a new version. When I asked
: Why does lack of AMD64_EVENTSEL_GUESTONLY disable the PMU, but if and only if
: X86_FEATURE_PERFCTR_CORE? E.g. why does the behavior not also apply to legacy
: perfmon support?
I wanted an actual answer because I genuinely do not know what the correct
behavior is.
> - Appreciate any better way to probe for GUESTONLY support;
Again, wait for discussion in previous versions to resolve before posting a new
version. If your answer is "not as far as I know", that's totally fine, but
sending a new version without responding makes it unnecessarily difficult to
track down your "answer". E.g. instead of seeing a very direct "I don't know",
I had to discover that answer by finding a hint buried in the ignored section of
a new patch.
> arch/x86/kvm/svm/svm.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 7584eb85410b..1ab885596510 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4884,6 +4884,20 @@ static __init void svm_adjust_mmio_mask(void)
> 	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
> }
>
> +static __init bool pmu_has_guestonly_mode(void)
> +{
> +	u64 original, value;
> +
> +	preempt_disable();
> +	rdmsrl(MSR_F15H_PERF_CTL0, original);
What guarantees this MSR actually exists? In v1, it was guarded by enable_pmu=%true,
but that's no longer the case. And KVM does
	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
		if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
			return NULL;
which very strongly suggests this MSR doesn't exist if the CPU supports only the
"legacy" PMU.
> +	wrmsrl(MSR_F15H_PERF_CTL0, AMD64_EVENTSEL_GUESTONLY);
> +	rdmsrl(MSR_F15H_PERF_CTL0, value);
> +	wrmsrl(MSR_F15H_PERF_CTL0, original);
> +	preempt_enable();
> +
> +	return value == AMD64_EVENTSEL_GUESTONLY;
> +}
> +
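On the above: if probing is supposed to work on legacy PMUs too, I'd expect
the probe to poke the legacy event select when PERFCTR_CORE is unsupported,
e.g. something like the below (completely untested sketch; assumes
MSR_K7_EVNTSEL0 is the right legacy register to poke, which ties into my
question at the bottom):

	static __init bool pmu_has_guestonly_mode(void)
	{
		/* Use the legacy event select if PERFCTR_CORE is unsupported. */
		u32 msr = boot_cpu_has(X86_FEATURE_PERFCTR_CORE) ?
			  MSR_F15H_PERF_CTL0 : MSR_K7_EVNTSEL0;
		u64 original, value;

		/* Write GUESTONLY and check whether the bit sticks. */
		preempt_disable();
		rdmsrl(msr, original);
		wrmsrl(msr, AMD64_EVENTSEL_GUESTONLY);
		rdmsrl(msr, value);
		wrmsrl(msr, original);
		preempt_enable();

		return value == AMD64_EVENTSEL_GUESTONLY;
	}
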
> static __init void svm_set_cpu_caps(void)
> {
> 	kvm_set_cpu_caps();
> @@ -4928,6 +4942,9 @@ static __init void svm_set_cpu_caps(void)
> 	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
> 		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
>
> +	/* Probe for AMD64_EVENTSEL_GUESTONLY support */
I've said this several times recently: use comments to explain _why_ and to call
out subtleties. The code quite obviously is probing for guest-only support, what's
not obvious is why guest-only support is mandatory for vPMU support. It may be
obvious to you, but please try to view all of this code from the perspective of
someone who has only passing knowledge of the various components, i.e. doesn't
know the gory details of exactly what KVM supports.
Poking around, I see that pmc_reprogram_counter() unconditionally does
		.exclude_host = 1,
and amd_core_hw_config()
	if (event->attr.exclude_host && event->attr.exclude_guest)
		/*
		 * When HO == GO == 1 the hardware treats that as GO == HO == 0
		 * and will count in both modes. We don't want to count in that
		 * case so we emulate no-counting by setting US = OS = 0.
		 */
		event->hw.config &= ~(ARCH_PERFMON_EVENTSEL_USR |
				      ARCH_PERFMON_EVENTSEL_OS);
	else if (event->attr.exclude_host)
		event->hw.config |= AMD64_EVENTSEL_GUESTONLY;
	else if (event->attr.exclude_guest)
		event->hw.config |= AMD64_EVENTSEL_HOSTONLY;
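For reference, my read of the host/guest bit encodings (from the APM and the
comment above; my summary, so take it with a grain of salt):

	/*
	 * HOSTONLY GUESTONLY => counts in
	 *        0         0 => both host and guest
	 *        0         1 => guest only
	 *        1         0 => host only
	 *        1         1 => treated as HO == GO == 0, i.e. both
	 */
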
and so something like this seems appropriate
	/*
	 * KVM requires guest-only event support in order to isolate guest PMCs
	 * from host PMCs. SVM doesn't provide a way to atomically load MSRs
	 * on VMRUN, and manually adjusting counts before/after VMRUN is not
	 * accurate enough to properly virtualize a PMU.
	 */
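with the enable_pmu update presumably ending up as something like this (I'm
guessing at the exact form, since the relevant lines got trimmed from the
quote above):

	if (enable_pmu)
		enable_pmu = pmu_has_guestonly_mode();
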
But now I'm really confused, because if I'm reading the code correctly, perf
invokes amd_core_hw_config() for legacy PMUs, i.e. even if PERFCTR_CORE isn't
supported. And the APM documents the host/guest bits only for "Core Performance
Event-Select Registers".
So either (a) GUESTONLY isn't supported on legacy CPUs and perf is relying on AMD
CPUs ignoring reserved bits or (b) GUESTONLY _is_ supported on legacy PMUs and
pmu_has_guestonly_mode() is checking the wrong MSR when running on older CPUs.
And if (a) is true, then how on earth does KVM support vPMU when running on a
legacy PMU? Is vPMU on AMD just wildly broken? Am I missing something?