Message-ID: <YGyKsna7CcncX0g6@hirez.programming.kicks-ass.net>
Date: Tue, 6 Apr 2021 18:22:10 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Like Xu <like.xu@...ux.intel.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>, eranian@...gle.com,
andi@...stfloor.org, kan.liang@...ux.intel.com,
wei.w.wang@...el.com, Wanpeng Li <wanpengli@...cent.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH v4 02/16] perf/x86/intel: Handle guest PEBS overflow PMI
 for KVM guest

On Mon, Mar 29, 2021 at 01:41:23PM +0800, Like Xu wrote:
> With PEBS virtualization, the guest PEBS records get delivered to the
> guest DS, and the host PMI handler uses perf_guest_cbs->is_in_guest()
> to distinguish whether the PMI comes from guest code, as is done for
> Intel PT.
>
> No matter how many guest PEBS counters have overflowed, triggering one
> fake event is enough. The fake event causes the KVM PMI callback to
> be called, thereby injecting the PEBS overflow PMI into the guest.
>
> KVM will inject the PMI with BUFFER_OVF set even if the guest DS is
> empty, which should be harmless. The guest PEBS handler will then
> retrieve the correct information from its own PEBS records buffer.
>
> Originally-by: Andi Kleen <ak@...ux.intel.com>
> Co-developed-by: Kan Liang <kan.liang@...ux.intel.com>
> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
> Signed-off-by: Like Xu <like.xu@...ux.intel.com>
> ---
> arch/x86/events/intel/core.c | 45 +++++++++++++++++++++++++++++++++++-
> 1 file changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 591d60cc8436..af9ac48fe840 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2747,6 +2747,46 @@ static void intel_pmu_reset(void)
>  	local_irq_restore(flags);
>  }
>
> +/*
> + * We may be running with guest PEBS events created by KVM, and the
> + * PEBS records are logged into the guest's DS and invisible to the host.
> + *
> + * In the case of guest PEBS overflow, we only trigger a fake event
> + * to emulate the PEBS overflow PMI for guest PEBS counters in KVM.
> + * After the next VM-entry, the guest will check its DS area and
> + * read the guest PEBS records.
> + *
> + * The contents and other behavior of the guest event do not matter.
> + */
> +static int x86_pmu_handle_guest_pebs(struct pt_regs *regs,
> +				     struct perf_sample_data *data)
> +{
> +	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> +	u64 guest_pebs_idxs = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
> +	struct perf_event *event = NULL;
> +	int bit;
> +
> +	if (!x86_pmu.pebs_active || !guest_pebs_idxs)
> +		return 0;
> +
> +	for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs,
> +			 INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed) {
> +
> +		event = cpuc->events[bit];
> +		if (!event->attr.precise_ip)
> +			continue;
> +
> +		perf_sample_data_init(data, 0, event->hw.last_period);
> +		if (perf_event_overflow(event, data, regs))
> +			x86_pmu_stop(event, 0);
> +
> +		/* Injecting one fake event is enough. */
> +		return 1;
> +	}
> +
> +	return 0;
> +}

Why the return value? It is ignored.
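
IOW, unless a later patch grows a consumer for it, something like this
(untested sketch, same names as in the patch above) seems simpler:

static void x86_pmu_handle_guest_pebs(struct pt_regs *regs,
				      struct perf_sample_data *data)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	/* PEBS-enabled counters not owned by the host, i.e. guest ones. */
	u64 guest_pebs_idxs = cpuc->pebs_enabled & ~cpuc->intel_ctrl_host_mask;
	struct perf_event *event;
	int bit;

	if (!x86_pmu.pebs_active || !guest_pebs_idxs)
		return;

	for_each_set_bit(bit, (unsigned long *)&guest_pebs_idxs,
			 INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed) {
		event = cpuc->events[bit];
		if (!event->attr.precise_ip)
			continue;

		perf_sample_data_init(data, 0, event->hw.last_period);
		if (perf_event_overflow(event, data, regs))
			x86_pmu_stop(event, 0);

		/* One fake event is enough; done. */
		return;
	}
}
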
> +
>  static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  {
>  	struct perf_sample_data data;
> @@ -2797,7 +2837,10 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  		u64 pebs_enabled = cpuc->pebs_enabled;
>  
>  		handled++;
> -		x86_pmu.drain_pebs(regs, &data);
> +		if (x86_pmu.pebs_vmx && perf_guest_cbs && perf_guest_cbs->is_in_guest())
> +			x86_pmu_handle_guest_pebs(regs, &data);
> +		else
> +			x86_pmu.drain_pebs(regs, &data);

Why the else? Since we can't tell if the PMI was for the guest or
for our own DS, we should check both, no?
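
That is, something like this (completely untested; assumes drain_pebs()
is fine being called with an empty host DS, which I think it is):

		handled++;
		if (x86_pmu.pebs_vmx && perf_guest_cbs && perf_guest_cbs->is_in_guest())
			x86_pmu_handle_guest_pebs(regs, &data);
		x86_pmu.drain_pebs(regs, &data);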