Message-ID: <a0b5dc29-e63f-6ec9-a03f-6435cb3373c6@intel.com>
Date: Fri, 15 Jan 2021 10:49:49 +0800
From: "Xu, Like" <like.xu@...el.com>
To: Sean Christopherson <seanjc@...gle.com>,
Like Xu <like.xu@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Kan Liang <kan.liang@...ux.intel.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, eranian@...gle.com,
kvm@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Andi Kleen <andi@...stfloor.org>, wei.w.wang@...el.com,
luwei.kang@...el.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 04/17] perf: x86/ds: Handle guest PEBS overflow PMI and
inject it to guest
On 2021/1/15 2:55, Sean Christopherson wrote:
> On Mon, Jan 04, 2021, Like Xu wrote:
>> ---
>> arch/x86/events/intel/ds.c | 62 ++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 62 insertions(+)
>>
>> diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
>> index b47cc4226934..c499bdb58373 100644
>> --- a/arch/x86/events/intel/ds.c
>> +++ b/arch/x86/events/intel/ds.c
>> @@ -1721,6 +1721,65 @@ intel_pmu_save_and_restart_reload(struct perf_event *event, int count)
>> return 0;
>> }
>>
>> +/*
>> + * We may be running with guest PEBS events created by KVM, and the
>> + * PEBS records are logged into the guest's DS and invisible to host.
>> + *
>> + * In the case of guest PEBS overflow, we only trigger a fake event
>> + * to emulate the PEBS overflow PMI for guest PEBS counters in KVM.
>> + * The guest will then, at the next VM-entry, check the guest DS area
>> + * to read the guest PEBS records.
>> + *
>> + * The guest PEBS overflow PMI may be dropped when both the guest and
>> + * the host use PEBS. Therefore, KVM will not enable guest PEBS while
>> + * host PEBS is enabled, since it may cause a confusing unknown NMI.
>> + *
>> + * The contents and other behavior of the guest event do not matter.
>> + */
>> +static int intel_pmu_handle_guest_pebs(struct cpu_hw_events *cpuc,
>> + struct pt_regs *iregs,
>> + struct debug_store *ds)
>> +{
>> + struct perf_sample_data data;
>> + struct perf_event *event = NULL;
>> + u64 guest_pebs_idxs = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
>> + int bit;
>> +
>> + /*
>> + * Ideally, we should check guest DS to understand if it's
>> + * a guest PEBS overflow PMI from guest PEBS counters.
>> + * However, it brings high overhead to retrieve guest DS in host.
>> + * So we check host DS instead for performance.
>> + *
>> + * If the PEBS interrupt threshold on the host is not exceeded in an
>> + * NMI, there must be a PEBS overflow PMI generated from the guest
>> + * PEBS counters. There is no ambiguity since the reported event in
>> + * the PMI is guest-only. It gets handled correctly on a case-by-case
>> + * basis for each event.
>> + *
>> + * Note: KVM disables the co-existence of guest PEBS and host PEBS.
> By "KVM", do you mean KVM's loading of the MSRs provided by intel_guest_get_msrs()?
> Because the PMU should really be the entity that controls guest vs. host. KVM
> should just be a dumb pipe that handles the mechanics of how values are context
> switch.
intel_guest_get_msrs() and atomic_switch_perf_msrs()
work together to disable the co-existence of guest PEBS and host PEBS:
https://lore.kernel.org/kvm/961e6135-ff6d-86d1-3b7b-a1846ad0e4c4@intel.com/
static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
...
	if (nr_msrs > 2 && (msrs[1].guest & msrs[0].guest)) {
		msrs[2].guest = pmu->ds_area;
		if (nr_msrs > 3)
			msrs[3].guest = pmu->pebs_data_cfg;
	}
	for (i = 0; i < nr_msrs; i++)
...
>
> For example, commit 7099e2e1f4d9 ("KVM: VMX: disable PEBS before a guest entry"),
> where KVM does an explicit WRMSR(PEBS_ENABLE) to (attempt to) force PEBS
> quiescence, is flawed in that the PMU can re-enable PEBS after the WRMSR if a
> PMI arrives between the WRMSR and VM-Enter (because VMX can't block NMIs). The
> PMU really needs to be involved in the WRMSR workaround.
Thanks, I will carefully confirm the PEBS quiescence behavior on ICX.
But it's fine to keep "wrmsrl(MSR_IA32_PEBS_ENABLE, 0);" here,
since we will load a new guest value (if any) for this MSR later.
>
>> + */
>> + if (!guest_pebs_idxs || !in_nmi() ||
> Are PEBS updates guaranteed to be isolated in both directions on relevant
> hardware?
I believe that's true on ICX.
> By that I mean, will host updates be fully processed before VM-Enter
> completes, and guest updates before VM-Exit completes?
The situation is more complicated.
> If that's the case,
> then this path could be optimized to change the KVM invocation of the NMI
> handler so that the "is this a guest PEBS PMI" check is done if and only if the
> PMI originated from with the guest.
When a PEBS PMI arrives due to a guest workload and the vCPU vm-exits,
the code path from vm-exit to the host PEBS PMI handler may itself
generate another PEBS PMI and set the status bit. The current PMI
handler can't distinguish the two, and would treat the later one as a
spurious PMI and print a warning.
This is the main reason why we chose to disable the co-existence
of guest PEBS and host PEBS; future hardware enhancements
may lift this limitation.
---
thx, likexu
>
>> + ds->pebs_index >= ds->pebs_interrupt_threshold)
>> + return 0;
>> +
>> + for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs,
>> + INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed) {
>> +
>> + event = cpuc->events[bit];
>> + if (!event->attr.precise_ip)
>> + continue;
>> +
>> + perf_sample_data_init(&data, 0, event->hw.last_period);
>> + if (perf_event_overflow(event, &data, iregs))
>> + x86_pmu_stop(event, 0);
>> +
>> + /* Injecting one fake event is enough. */
>> + return 1;
>> + }
>> +
>> + return 0;
>> +}