Message-ID: <ZhhXxV3Z1UHLp1M1@google.com>
Date: Thu, 11 Apr 2024 14:36:05 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Xiong Zhang <xiong.y.zhang@...ux.intel.com>
Cc: pbonzini@...hat.com, peterz@...radead.org, mizhang@...gle.com,
kan.liang@...el.com, zhenyuw@...ux.intel.com, dapeng1.mi@...ux.intel.com,
jmattson@...gle.com, kvm@...r.kernel.org, linux-perf-users@...r.kernel.org,
linux-kernel@...r.kernel.org, zhiyuan.lv@...el.com, eranian@...gle.com,
irogers@...gle.com, samantha.alt@...el.com, like.xu.linux@...il.com,
chao.gao@...el.com
Subject: Re: [RFC PATCH 24/41] KVM: x86/pmu: Zero out unexposed
Counters/Selectors to avoid information leakage
On Fri, Jan 26, 2024, Xiong Zhang wrote:
> From: Mingwei Zhang <mizhang@...gle.com>
>
> Zero out unexposed counters/selectors because even though KVM intercepts
> all accesses to unexposed PMU MSRs, it does pass through RDPMC instruction
> which allows guest to read all GP counters and fixed counters. So, zero out
> unexposed counter values which might contain critical information for the
> host.
This belongs in the previous patch, it's effectively a bug fix. I appreciate
the push for finer granularity, but introducing a blatant bug and then immediately
fixing it goes too far.
> Signed-off-by: Mingwei Zhang <mizhang@...gle.com>
> ---
> arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index f79bebe7093d..4b4da7f17895 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -895,11 +895,27 @@ static void intel_restore_pmu_context(struct kvm_vcpu *vcpu)
> wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, pmc->eventsel);
> }
>
> + /*
> + * Zero out unexposed GP counters/selectors to avoid information leakage
> + * since passthrough PMU does not intercept RDPMC.
Zeroing the selectors is unnecessary. KVM still intercepts MSR_CORE_PERF_GLOBAL_CTRL,
so just ensure the PMCs that aren't exposed to the guest are globally disabled; a
stale selector can't count (or leak) anything if its counter is never enabled.
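To make the suggestion concrete, here's a rough sketch (illustrative only, not the
actual KVM code -- the helper name and layout are made up) of the mask of
MSR_CORE_PERF_GLOBAL_CTRL bits a guest would be allowed to set. Since KVM intercepts
writes to that MSR, clamping writes to this mask keeps every unexposed PMC globally
disabled, regardless of what's left in its selector:

```c
#include <stdint.h>

/*
 * Hypothetical helper: build the GLOBAL_CTRL bits the guest may set.
 * GP counter enables occupy bits [0, nr_gp); fixed counter enables
 * occupy bits [32, 32 + nr_fixed).  Bits outside this mask stay clear,
 * so unexposed counters can never be enabled by the guest.
 */
static uint64_t guest_global_ctrl_mask(unsigned int nr_gp,
				       unsigned int nr_fixed)
{
	uint64_t gp_bits = (nr_gp >= 64) ? ~0ull : (1ull << nr_gp) - 1;
	uint64_t fixed_bits = (nr_fixed >= 32) ? ~0u : (1u << nr_fixed) - 1;

	return gp_bits | (fixed_bits << 32);
}
```

E.g. a guest exposed 8 GP and 3 fixed counters would get mask 0x7000000ff.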
> + */
> + for (i = pmu->nr_arch_gp_counters; i < kvm_pmu_cap.num_counters_gp; i++) {
> + wrmsrl(MSR_IA32_PMC0 + i, 0);
> + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, 0);
> + }
> +
> wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
> for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
> pmc = &pmu->fixed_counters[i];
> wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, pmc->counter);
> }
> +
> + /*
> + * Zero out unexposed fixed counters to avoid information leakage
> + * since passthrough PMU does not intercept RDPMC.
I would call out that RDPMC interception is all or nothing, i.e. KVM can't
selectively intercept _some_ PMCs, and the MSR bitmaps don't apply to RDPMC.
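For readers less familiar with VMX: RDPMC interception is controlled by a single
VM-execution control bit, CPU_BASED_RDPMC_EXITING (bit 11 of the primary
processor-based controls, per the Intel SDM), so there is no per-counter
granularity the way the MSR bitmaps provide for RDMSR/WRMSR. A trivial sketch
(the helper is illustrative, not KVM's):

```c
#include <stdint.h>

/* VMX primary processor-based execution control, bit 11 (Intel SDM). */
#define CPU_BASED_RDPMC_EXITING	0x00000800u

/*
 * One control bit covers all counters: either every guest RDPMC causes
 * a VM-exit, or none does.  There is no RDPMC analogue of the per-MSR
 * interception bitmaps.
 */
static int rdpmc_intercepted(uint32_t cpu_based_exec_ctrl)
{
	return !!(cpu_based_exec_ctrl & CPU_BASED_RDPMC_EXITING);
}
```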
> + */
> + for (i = pmu->nr_arch_fixed_counters; i < kvm_pmu_cap.num_counters_fixed; i++)
> + wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
> }
>
> struct kvm_pmu_ops intel_pmu_ops __initdata = {
> --
> 2.34.1
>