Message-ID: <CALMp9eTLvJ6GW1mfgjO7CL7tW-79asykyz9=Fb=FfT74VRkDVA@mail.gmail.com>
Date:   Mon, 10 Apr 2023 22:36:23 -0700
From:   Jim Mattson <jmattson@...gle.com>
To:     Like Xu <like.xu.linux@...il.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH V5 05/10] KVM: x86/pmu: Disable vPMU if the minimum num of
 counters isn't met

On Mon, Apr 10, 2023 at 3:51 AM Like Xu <like.xu.linux@...il.com> wrote:
>
> From: Like Xu <likexu@...cent.com>
>
> Disable PMU support when running on AMD and perf reports fewer than four
> general-purpose counters. All AMD PMUs must define at least four counters
> because AMD's legacy architecture hardcodes the number of counters without
> providing a way for software to enumerate it, e.g. from AMD's APM:
>
>  The legacy architecture defines four performance counters (PerfCtrn)
>  and corresponding event-select registers (PerfEvtSeln).
>
> Virtualizing fewer than four counters can lead to guest instability as
> software expects four counters to be available.

I'm confused. Isn't zero less than four?

> Suggested-by: Sean Christopherson <seanjc@...gle.com>
> Signed-off-by: Like Xu <likexu@...cent.com>
> ---
>  arch/x86/kvm/pmu.h | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
> index dd7c7d4ffe3b..002b527360f4 100644
> --- a/arch/x86/kvm/pmu.h
> +++ b/arch/x86/kvm/pmu.h
> @@ -182,6 +182,9 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
>                         enable_pmu = false;
>         }
>
> +       if (!is_intel && kvm_pmu_cap.num_counters_gp < AMD64_NUM_COUNTERS)
> +               enable_pmu = false;
> +
>         if (!enable_pmu) {
>                 memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
>                 return;
> --
> 2.40.0
>
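
For anyone skimming the thread, here is a rough, untested sketch of where the
new check sits in kvm_init_pmu_capability(); the surrounding code is
paraphrased from memory rather than copied verbatim from the tree, so treat it
as illustrative only:

/* Paraphrased context, not a verbatim copy of arch/x86/kvm/pmu.h. */
static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
{
	bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;

	/* perf fills in kvm_pmu_cap, including num_counters_gp. */
	perf_get_x86_pmu_capability(&kvm_pmu_cap);

	/*
	 * Hide the PMU entirely if perf reports fewer general-purpose
	 * counters than AMD's legacy architecture guarantees
	 * (AMD64_NUM_COUNTERS == 4), since legacy guests assume all four
	 * PerfCtrn/PerfEvtSeln pairs exist and have no way to enumerate
	 * fewer.
	 */
	if (!is_intel && kvm_pmu_cap.num_counters_gp < AMD64_NUM_COUNTERS)
		enable_pmu = false;

	if (!enable_pmu) {
		memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
		return;
	}

	/* ... remaining capability sanitization ... */
}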
