Message-ID: <CALMp9eSQYp-BC_hERH0jzqY1gKU3HLV2YnJDjaAoR7DxRQu=fQ@mail.gmail.com>
Date: Tue, 6 Sep 2022 17:37:33 -0700
From: Jim Mattson <jmattson@...gle.com>
To: Like Xu <like.xu.linux@...il.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jim Mattson <jamttson@...gle.com>,
Kan Liang <kan.liang@...ux.intel.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: x86/pmu: omit "impossible" Intel counter MSRs from
MSR list
On Tue, Sep 6, 2022 at 1:16 AM Like Xu <like.xu.linux@...il.com> wrote:
>
> From: Like Xu <likexu@...cent.com>
>
> According to the Intel SDM (April 2022), Table 2-2 "IA-32 Architectural
> MSRs", combined with the reserved address ranges of PERFCTRx, EVENTSELy,
> and MSR_IA32_PMCz, the theoretical maximum number of Intel GP counters
> is 14, not 18:
>
> 14 = 0xE = min (
> 0xE = IA32_CORE_CAPABILITIES (0xCF) - IA32_PMC0 (0xC1),
> 0xF = IA32_OVERCLOCKING_STATUS (0x195) - IA32_PERFEVTSEL0 (0x186),
> 0xF = IA32_MCG_EXT_CTL (0x4D0) - IA32_A_PMC0 (0x4C1)
> )
>
> The source of the incorrect number 18 may be:
> 18 = 0x12 = IA32_PERF_STATUS (0x198) - IA32_PERFEVTSEL0 (0x186)
> but that range covers IA32_OVERCLOCKING_STATUS, which is also
> architectural. Cut the list to 14 entries to avoid false positives.
>
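As a quick cross-check of the arithmetic above, here's a throwaway
user-space sketch (mine, not part of the patch) that recomputes the
three reservation gaps from the SDM and takes their minimum:

#include <stdio.h>

int main(void)
{
	/* Gap between each counter block and the next architectural MSR. */
	unsigned int perfctr_gap  = 0x0CF - 0x0C1; /* IA32_CORE_CAPABILITIES - IA32_PMC0 */
	unsigned int evtsel_gap   = 0x195 - 0x186; /* IA32_OVERCLOCKING_STATUS - IA32_PERFEVTSEL0 */
	unsigned int full_pmc_gap = 0x4D0 - 0x4C1; /* IA32_MCG_EXT_CTL - IA32_A_PMC0 */
	unsigned int max_gp = perfctr_gap;

	if (evtsel_gap < max_gp)
		max_gp = evtsel_gap;
	if (full_pmc_gap < max_gp)
		max_gp = full_pmc_gap;

	printf("theoretical max GP counters = %u (0x%X)\n", max_gp, max_gp);
	return 0;
}

which prints 14 (0xE), matching the commit message.
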
> Cc: Kan Liang <kan.liang@...ux.intel.com>
> Cc: Jim Mattson <jamttson@...gle.com>
That should be 'jmattson.'
> Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
> Fixes: cf05a67b68b8 ("KVM: x86: omit "impossible" pmu MSRs from MSR list")
I'm not sure I completely agree with the "Fixes," since
IA32_OVERCLOCKING_STATUS didn't exist back then. However, Paolo did
make the incorrect assumption that Intel wouldn't cut the range even
further with the introduction of new MSRs.

To that point, aren't you setting yourself up for a future "Fixes"
referencing this change?

We should probably stop at the maximum number of GP PMCs supported
today (8, I think).

If Intel doubles the number of PMCs to remain competitive with AMD,
they'll probably put PMCs 8-15 in a completely different range of MSR
indices.
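
For illustration only, here's a toy user-space model (assumptions of
mine, not kernel code) of how the kvm_init_msr_list() filtering would
behave with the saved range capped at 8 GP counters; the cap constant
and the hard-coded counter count below are made up for the sketch:

#include <stdio.h>

#define MSR_ARCH_PERFMON_PERFCTR0	0x0C1
#define MAX_LISTED_GP_COUNTERS		8	/* hypothetical cap */

int main(void)
{
	unsigned int num_counters_gp = 6;	/* pretend CPUID.0xA reports 6 */
	unsigned int i;

	for (i = 0; i < MAX_LISTED_GP_COUNTERS; i++) {
		/* Mirrors the "continue" in the kernel loop: skip listed
		 * counters beyond what the hardware actually exposes. */
		if (i >= num_counters_gp)
			continue;
		printf("keeping IA32_PMC%u (MSR 0x%X)\n",
		       i, MSR_ARCH_PERFMON_PERFCTR0 + i);
	}
	return 0;
}

If PMCs 8-15 ever do land at different MSR indices, they'd presumably
get their own list entries and case ranges rather than extending this
one.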
> Signed-off-by: Like Xu <likexu@...cent.com>
> ---
> arch/x86/kvm/x86.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 43a6a7efc6ec..98cdd4221447 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1431,8 +1431,6 @@ static const u32 msrs_to_save_all[] = {
> MSR_ARCH_PERFMON_PERFCTR0 + 8, MSR_ARCH_PERFMON_PERFCTR0 + 9,
> MSR_ARCH_PERFMON_PERFCTR0 + 10, MSR_ARCH_PERFMON_PERFCTR0 + 11,
> MSR_ARCH_PERFMON_PERFCTR0 + 12, MSR_ARCH_PERFMON_PERFCTR0 + 13,
> - MSR_ARCH_PERFMON_PERFCTR0 + 14, MSR_ARCH_PERFMON_PERFCTR0 + 15,
> - MSR_ARCH_PERFMON_PERFCTR0 + 16, MSR_ARCH_PERFMON_PERFCTR0 + 17,
> MSR_ARCH_PERFMON_EVENTSEL0, MSR_ARCH_PERFMON_EVENTSEL1,
> MSR_ARCH_PERFMON_EVENTSEL0 + 2, MSR_ARCH_PERFMON_EVENTSEL0 + 3,
> MSR_ARCH_PERFMON_EVENTSEL0 + 4, MSR_ARCH_PERFMON_EVENTSEL0 + 5,
> @@ -1440,8 +1438,6 @@ static const u32 msrs_to_save_all[] = {
> MSR_ARCH_PERFMON_EVENTSEL0 + 8, MSR_ARCH_PERFMON_EVENTSEL0 + 9,
> MSR_ARCH_PERFMON_EVENTSEL0 + 10, MSR_ARCH_PERFMON_EVENTSEL0 + 11,
> MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
> - MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
> - MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
> MSR_IA32_PEBS_ENABLE, MSR_IA32_DS_AREA, MSR_PEBS_DATA_CFG,
>
> MSR_K7_EVNTSEL0, MSR_K7_EVNTSEL1, MSR_K7_EVNTSEL2, MSR_K7_EVNTSEL3,
> @@ -6943,12 +6939,12 @@ static void kvm_init_msr_list(void)
> intel_pt_validate_hw_cap(PT_CAP_num_address_ranges) * 2)
> continue;
> break;
> - case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 17:
> + case MSR_ARCH_PERFMON_PERFCTR0 ... MSR_ARCH_PERFMON_PERFCTR0 + 13:
> if (msrs_to_save_all[i] - MSR_ARCH_PERFMON_PERFCTR0 >=
> min(INTEL_PMC_MAX_GENERIC, kvm_pmu_cap.num_counters_gp))
> continue;
> break;
> - case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 17:
> + case MSR_ARCH_PERFMON_EVENTSEL0 ... MSR_ARCH_PERFMON_EVENTSEL0 + 13:
> if (msrs_to_save_all[i] - MSR_ARCH_PERFMON_EVENTSEL0 >=
> min(INTEL_PMC_MAX_GENERIC, kvm_pmu_cap.num_counters_gp))
> continue;
> --
> 2.37.3
>