Message-ID: <bc6d19ec-7ceb-0414-da68-e271466b9b8b@intel.com>
Date: Tue, 18 May 2021 16:44:13 +0800
From: "Xu, Like" <like.xu@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, weijiang.yang@...el.com,
Kan Liang <kan.liang@...ux.intel.com>, ak@...ux.intel.com,
wei.w.wang@...el.com, eranian@...gle.com, liuxiangdong5@...wei.com,
linux-kernel@...r.kernel.org, x86@...nel.org, kvm@...r.kernel.org,
Luwei Kang <luwei.kang@...el.com>,
Like Xu <like.xu@...ux.intel.com>
Subject: Re: [PATCH v6 06/16] KVM: x86/pmu: Add IA32_PEBS_ENABLE MSR emulation for extended PEBS
On 2021/5/17 16:32, Peter Zijlstra wrote:
> On Tue, May 11, 2021 at 10:42:04AM +0800, Like Xu wrote:
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 2f89fd599842..c791765f4761 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3898,31 +3898,49 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
>>  	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
>>  	struct perf_guest_switch_msr *arr = cpuc->guest_switch_msrs;
>>  	u64 intel_ctrl = hybrid(cpuc->pmu, intel_ctrl);
>> +	u64 pebs_mask = (x86_pmu.flags & PMU_FL_PEBS_ALL) ?
>> +		cpuc->pebs_enabled : (cpuc->pebs_enabled & PEBS_COUNTER_MASK);
>> +
>> +	*nr = 0;
>> +	arr[(*nr)++] = (struct perf_guest_switch_msr){
>> +		.msr = MSR_CORE_PERF_GLOBAL_CTRL,
>> +		.host = intel_ctrl & ~cpuc->intel_ctrl_guest_mask,
>> +		.guest = intel_ctrl & (~cpuc->intel_ctrl_host_mask | ~pebs_mask),
>> +	};
>>
>> +	if (!x86_pmu.pebs)
>> +		return arr;
>>
>> +	/*
>> +	 * If PMU counter has PEBS enabled it is not enough to
>> +	 * disable counter on a guest entry since PEBS memory
>> +	 * write can overshoot guest entry and corrupt guest
>> +	 * memory. Disabling PEBS solves the problem.
>> +	 *
>> +	 * Don't do this if the CPU already enforces it.
>> +	 */
>> +	if (x86_pmu.pebs_no_isolation) {
>> +		arr[(*nr)++] = (struct perf_guest_switch_msr){
>> +			.msr = MSR_IA32_PEBS_ENABLE,
>> +			.host = cpuc->pebs_enabled,
>> +			.guest = 0,
>> +		};
>> +		return arr;
>>  	}
>>
>> +	if (!x86_pmu.pebs_vmx)
>> +		return arr;
>> +
>> +	arr[*nr] = (struct perf_guest_switch_msr){
>> +		.msr = MSR_IA32_PEBS_ENABLE,
>> +		.host = cpuc->pebs_enabled & ~cpuc->intel_ctrl_guest_mask,
>> +		.guest = pebs_mask & ~cpuc->intel_ctrl_host_mask,
>> +	};
>> +
>> +	/* Set hw GLOBAL_CTRL bits for PEBS counter when it runs for guest */
>> +	arr[0].guest |= arr[*nr].guest;
>> +
>> +	++(*nr);
>>  	return arr;
>>  }
> ISTR saying I was confused as heck by this function, I still don't see
> clarifying comments :/
>
> What's .host and .guest ?
Would adding the following comment help?
+/*
+ * Currently, the only caller of this function is the atomic_switch_perf_msrs().
+ * The host perf context helps to prepare the values of the real hardware for
+ * a set of MSRs that need to be switched atomically during a VMX transition.
+ *
+ * For example, the pseudocode needed to add a new msr should look like:
+ *
+ * arr[(*nr)++] = (struct perf_guest_switch_msr){
+ *	.msr = the hardware msr address,
+ *	.host = the value the hardware has when it doesn't run a guest,
+ *	.guest = the value the hardware has when it runs a guest,
+ * };
+ *
+ * These values have nothing to do with the emulated values the guest sees
+ * when it uses {RD,WR}MSR, which should be handled in the KVM context.
+ */
static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
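
For reference, the consumer of these values sits on the KVM/VMX side. Below is a
minimal sketch of that consumer, loosely modeled on atomic_switch_perf_msrs() in
arch/x86/kvm/vmx/vmx.c with this series applied; the vcpu_to_pmu()/data-pointer
plumbing shown here is my assumption about how the new argument gets wired up,
not a quote from the series. The idea is that each returned entry becomes a VMX
MSR auto-load/store slot, so the CPU loads .guest on VM-entry and restores .host
on VM-exit:

static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
{
	int i, nr_msrs;
	struct perf_guest_switch_msr *msrs;
	struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);	/* illustrative */

	/* Ask the host PMU which MSRs need switching and with what values. */
	msrs = perf_guest_get_msrs(&nr_msrs, (void *)pmu);
	if (!msrs)
		return;

	for (i = 0; i < nr_msrs; i++)
		if (msrs[i].host == msrs[i].guest)
			/* Identical values: no need to burn an auto-switch slot. */
			clear_atomic_switch_msr(vmx, msrs[i].msr);
		else
			/* Load .guest on VM-entry, restore .host on VM-exit. */
			add_atomic_switch_msr(vmx, msrs[i].msr, msrs[i].guest,
					      msrs[i].host, false);
}

That is also why .host/.guest are raw hardware values rather than the emulated
MSR values the guest reads or writes, which stay in the KVM emulation path.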