Message-ID: <X/TV9nZw49XFwDF/@google.com>
Date: Tue, 5 Jan 2021 13:11:18 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Like Xu <like.xu@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Paolo Bonzini <pbonzini@...hat.com>, eranian@...gle.com,
kvm@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Andi Kleen <andi@...stfloor.org>,
Kan Liang <kan.liang@...ux.intel.com>, wei.w.wang@...el.com,
luwei.kang@...el.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 06/17] KVM: x86/pmu: Add IA32_PEBS_ENABLE MSR
emulation for extended PEBS
On Mon, Jan 04, 2021, Like Xu wrote:
> If IA32_PERF_CAPABILITIES.PEBS_BASELINE [bit 14] is set, the
> IA32_PEBS_ENABLE MSR exists and all architecturally enumerated fixed
> and general purpose counters have corresponding bits in IA32_PEBS_ENABLE
> that enable generation of PEBS records. The general-purpose counter bits
> start at bit IA32_PEBS_ENABLE[0], and the fixed counter bits start at
> bit IA32_PEBS_ENABLE[32].
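For reference, the enable-bit layout described above amounts to something
like this (rough sketch, helper name made up, not part of the patch):

static u64 pebs_enable_bits(int nr_gp, int nr_fixed)
{
	/* GP counter enable bits start at bit 0 ... */
	u64 gp_bits = GENMASK_ULL(nr_gp - 1, 0);
	/* ... and fixed counter enable bits start at bit 32. */
	u64 fixed_bits = GENMASK_ULL(nr_fixed - 1, 0) << 32;

	return gp_bits | fixed_bits;
}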
>
> When guest PEBS is enabled, the IA32_PEBS_ENABLE MSR will be added to
> the list returned by perf_guest_switch_msr() and switched across VMX
> transitions, just like the CORE_PERF_GLOBAL_CTRL MSR.
>
> Originally-by: Andi Kleen <ak@...ux.intel.com>
> Co-developed-by: Kan Liang <kan.liang@...ux.intel.com>
> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
> Co-developed-by: Luwei Kang <luwei.kang@...el.com>
> Signed-off-by: Luwei Kang <luwei.kang@...el.com>
> Signed-off-by: Like Xu <like.xu@...ux.intel.com>
> ---
> arch/x86/events/intel/core.c | 20 ++++++++++++++++++++
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/include/asm/msr-index.h | 6 ++++++
> arch/x86/kvm/vmx/pmu_intel.c | 28 ++++++++++++++++++++++++++++
> 4 files changed, 55 insertions(+)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index af457f8cb29d..6453b8a6834a 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3715,6 +3715,26 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
> *nr = 2;
> }
>
> + if (cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask) {
> + arr[1].msr = MSR_IA32_PEBS_ENABLE;
> + arr[1].host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask;
> + arr[1].guest = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
> + /*
> + * Guest PEBS is disabled whenever host PEBS is enabled: enabling both
> + * could inject unexpected PMIs into the host and cause the guest's
> + * PEBS overflow PMIs to be missed.
> + */
> + if (arr[1].host)
> + arr[1].guest = 0;
> + arr[0].guest |= arr[1].guest;
Can't you modify the code that strips the PEBS counters from the guest's
value instead of poking into the array entry after the fact?
Also, why is this scenario even allowed? Can't we force exclude_guest for
events that use PEBS?
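E.g. something like this in intel_pmu_hw_config() (untested sketch; the
exact hook and the behavior, force vs. reject, are up for debate):

	/*
	 * Sketch only: PEBS events must exclude the guest.  Forcing the
	 * flag is one option, rejecting the event (-EINVAL) is another.
	 */
	if (event->attr.precise_ip)
		event->attr.exclude_guest = 1;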
> + *nr = 2;
> + } else if (*nr == 1) {
> + /* Remove MSR_IA32_PEBS_ENABLE from MSR switch list in KVM */
> + arr[1].msr = MSR_IA32_PEBS_ENABLE;
> + arr[1].host = arr[1].guest = 0;
> + *nr = 2;
Similar to above, rather than checking "*nr == 1", this should properly integrate
with the "x86_pmu.pebs && x86_pmu.pebs_no_isolation" logic instead of poking
into the array after the fact.
By incorporating both suggestions, the logic can be streamlined significantly,
and IMO the overall flow becomes much more understandable.  Untested...
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index d4569bfa83e3..c5cc7e558c8e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3708,24 +3708,39 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr)
arr[0].msr = MSR_CORE_PERF_GLOBAL_CTRL;
arr[0].host = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_guest_mask;
arr[0].guest = x86_pmu.intel_ctrl & ~cpuc->intel_ctrl_host_mask;
- if (x86_pmu.flags & PMU_FL_PEBS_ALL)
- arr[0].guest &= ~cpuc->pebs_enabled;
- else
- arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+
+ /*
+ * Disable PEBS in the guest if PEBS is used by the host; enabling PEBS
+ * in both will lead to unexpected PMIs in the host and/or missed PMIs
+ * in the guest.
+ */
+ if (cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask) {
+ if (x86_pmu.flags & PMU_FL_PEBS_ALL)
+ arr[0].guest &= ~cpuc->pebs_enabled;
+ else
+ arr[0].guest &= ~(cpuc->pebs_enabled & PEBS_COUNTER_MASK);
+ }
*nr = 1;
- if (x86_pmu.pebs && x86_pmu.pebs_no_isolation) {
- /*
- * If PMU counter has PEBS enabled it is not enough to
- * disable counter on a guest entry since PEBS memory
- * write can overshoot guest entry and corrupt guest
- * memory. Disabling PEBS solves the problem.
- *
- * Don't do this if the CPU already enforces it.
- */
+ if (x86_pmu.pebs) {
arr[1].msr = MSR_IA32_PEBS_ENABLE;
- arr[1].host = cpuc->pebs_enabled;
- arr[1].guest = 0;
+ arr[1].host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask;
+
+ /*
+ * Host and guest PEBS are mutually exclusive. Load the guest
+ * value iff PEBS is disabled in the host. If PEBS is enabled
+ * in the host and the CPU supports PEBS isolation, disabling
+ * the counters is sufficient (see above); skip the MSR loads
+ * by stuffing guest=host (KVM will remove the entry). Without
+ * isolation, PEBS must be explicitly disabled prior to
+ * VM-Enter to prevent PEBS writes from overshooting VM-Enter.
+ */
+ if (!arr[1].host)
+ arr[1].guest = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
+ else if (x86_pmu.pebs_no_isolation)
+ arr[1].guest = 0;
+ else
+ arr[1].guest = arr[1].host;
*nr = 2;
}
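
On the KVM side, IIRC atomic_switch_perf_msrs() already drops entries where
host == guest, roughly (from memory):

	for (i = 0; i < nr_msrs; i++)
		if (msrs[i].host == msrs[i].guest)
			clear_atomic_switch_msr(vmx, msrs[i].msr);
		else
			add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
					      msrs[i].host, false);

so stuffing guest=host for the no-PEBS/isolated case really does remove
MSR_IA32_PEBS_ENABLE from the VM-Enter/VM-Exit switch list.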