Message-ID: <aCc_LmORNibXBt8V@google.com>
Date: Fri, 16 May 2025 06:35:42 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Mingwei Zhang <mizhang@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>, Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Ian Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
"Liang, Kan" <kan.liang@...ux.intel.com>, "H. Peter Anvin" <hpa@...or.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-kselftest@...r.kernel.org,
Yongwei Ma <yongwei.ma@...el.com>, Xiong Zhang <xiong.y.zhang@...ux.intel.com>,
Dapeng Mi <dapeng1.mi@...ux.intel.com>, Jim Mattson <jmattson@...gle.com>,
Sandipan Das <sandipan.das@....com>, Zide Chen <zide.chen@...el.com>,
Eranian Stephane <eranian@...gle.com>, Shukla Manali <Manali.Shukla@....com>,
Nikunj Dadhania <nikunj.dadhania@....com>
Subject: Re: [PATCH v4 24/38] KVM: x86/pmu: Exclude PMU MSRs in vmx_get_passthrough_msr_slot()
On Mon, Mar 24, 2025, Mingwei Zhang wrote:
> Explicitly reject PMU MSRs in
> vmx_get_passthrough_msr_slot(), since interception of PMU MSRs is
> handled specially in intel_passthrough_pmu_msrs().
>
> Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
> Co-developed-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
> Signed-off-by: Dapeng Mi <dapeng1.mi@...ux.intel.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 12 +++++++++++-
> 1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 38ecf3c116bd..7bb16bed08da 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -165,7 +165,7 @@ module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
>
> /*
> * List of MSRs that can be directly passed to the guest.
> - * In addition to these x2apic, PT and LBR MSRs are handled specially.
> + * In addition to these x2apic, PMU, PT and LBR MSRs are handled specially.
> */
> static u32 vmx_possible_passthrough_msrs[MAX_POSSIBLE_PASSTHROUGH_MSRS] = {
> MSR_IA32_SPEC_CTRL,
> @@ -691,6 +691,16 @@ static int vmx_get_passthrough_msr_slot(u32 msr)
> case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
> case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
> /* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
> + case MSR_IA32_PMC0 ...
> + MSR_IA32_PMC0 + KVM_MAX_NR_GP_COUNTERS - 1:
> + case MSR_IA32_PERFCTR0 ...
> + MSR_IA32_PERFCTR0 + KVM_MAX_NR_GP_COUNTERS - 1:
> + case MSR_CORE_PERF_FIXED_CTR0 ...
> + MSR_CORE_PERF_FIXED_CTR0 + KVM_MAX_NR_FIXED_COUNTERS - 1:
> + case MSR_CORE_PERF_GLOBAL_STATUS:
> + case MSR_CORE_PERF_GLOBAL_CTRL:
> + case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> + /* PMU MSRs. These are handled in intel_passthrough_pmu_msrs() */
> return -ENOENT;
> }
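For context, returning -ENOENT here tells the intercept helpers that the
MSR has no slot in vmx_possible_passthrough_msrs[], i.e. no shadow-bitmap
bookkeeping is done for it and its intercepts are managed entirely by the
caller. Roughly, condensed from vmx_disable_intercept_for_msr() in vmx.c
(approximate; the MSR-filter handling is elided):

	void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
					   int type)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);
		unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
		int idx;

		if (!cpu_has_vmx_msr_bitmap())
			return;

		/*
		 * Record the desired state in the shadow bitmap, used to
		 * resync when the MSR filters change.  A negative return
		 * (-ENOENT) simply skips this bookkeeping.
		 */
		idx = vmx_get_passthrough_msr_slot(msr);
		if (idx >= 0) {
			if (type & MSR_TYPE_R)
				clear_bit(idx, vmx->shadow_msr_intercept.read);
			if (type & MSR_TYPE_W)
				clear_bit(idx, vmx->shadow_msr_intercept.write);
		}

		if (type & MSR_TYPE_R)
			vmx_clear_msr_bitmap_read(msr_bitmap, msr);
		if (type & MSR_TYPE_W)
			vmx_clear_msr_bitmap_write(msr_bitmap, msr);
	}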
This belongs in the patch that configures interception. A better split would be
to have an Intel patch and an AMD patch, not three patches with logic splattered
all over.
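Concretely, the exclusion above pairs with something along these lines on
the Intel side (purely illustrative; intel_passthrough_pmu_msrs() is
introduced elsewhere in this series, so its actual body may differ):

	static void intel_passthrough_pmu_msrs(struct kvm_vcpu *vcpu)
	{
		struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
		int i;

		/* GP counters, via both the legacy and full-width ranges. */
		for (i = 0; i < pmu->nr_arch_gp_counters; i++) {
			vmx_set_intercept_for_msr(vcpu, MSR_IA32_PERFCTR0 + i,
						  MSR_TYPE_RW, false);
			vmx_set_intercept_for_msr(vcpu, MSR_IA32_PMC0 + i,
						  MSR_TYPE_RW, false);
		}

		for (i = 0; i < pmu->nr_arch_fixed_counters; i++)
			vmx_set_intercept_for_msr(vcpu,
						  MSR_CORE_PERF_FIXED_CTR0 + i,
						  MSR_TYPE_RW, false);

		vmx_set_intercept_for_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
					  MSR_TYPE_RW, false);
	}

Keeping the -ENOENT cases and this configuration in the same patch (and
likewise for the AMD side) makes the pairing reviewable in one place.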