Message-ID: <87a11cd6-55aa-025c-2e74-7dc91d82798a@gmail.com>
Date: Fri, 19 Nov 2021 19:10:09 +0800
From: Like Xu <like.xu.linux@...il.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH 3/4] KVM: x86/pmu: Reuse find_perf_hw_id() and drop
find_fixed_event()
On 19/11/2021 6:29 pm, Paolo Bonzini wrote:
> On 11/19/21 08:16, Like Xu wrote:
>>
>> The proposal is to get [V2] merged and to continue reviewing the
>> fixes from [1] seamlessly, and then to unify all the fixed/gp code,
>> including intel_find_fixed_event(), as a follow-up.
>
> I agree and I'll review it soon. Though, why not add the
>
> +	&& (pmc_is_fixed(pmc) ||
> +	    pmu->available_event_types & (1 << i)))
>
If we have fixed ctr 0 backing the "retired instructions" event, but
bit 01 of the guest CPUID 0AH.EBX leaf is masked, then we get true
from "pmc_is_fixed(pmc)" and false from
"pmu->available_event_types & (1 << i)", so the loop breaks and KVM
goes on to program a perf_event for the pmc.
(The SDM says: Bit 01: Instruction retired event not available if 1
or if EAX[31:24] < 2.)
The right behavior is for KVM not to program a perf_event for this
pmc, since the event is unavailable (whether the counter is gp or
fixed): the counter MSR pair can still be accessed but must not count.
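
To make the failure mode concrete, here is a minimal, self-contained
sketch (plain C with illustrative values, not KVM code; the variable
names only mimic the kernel ones):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	/* Fixed ctr 0 backs "instructions retired", generic event index 1. */
	bool pmc_is_fixed = true;
	int i = 1;

	/* CPUID 0AH.EBX bit 01 is set, so clear the "available" bit here. */
	unsigned int available_event_types = ~(1u << i);

	/* The proposed check: the OR short-circuits on the fixed counter. */
	if (pmc_is_fixed || (available_event_types & (1u << i)))
		printf("perf_event programmed (wrong: the event is masked)\n");
	else
		printf("perf_event not programmed\n");

	return 0;
}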
The proposed final code may look like this:
/* UMask and Event Select Encodings for Intel CPUID Events */
static inline bool is_intel_cpuid_event(u8 event_select, u8 unit_mask)
{
	if ((!unit_mask && event_select == 0x3C) ||		/* unhalted core cycles */
	    (!unit_mask && event_select == 0xC0) ||		/* instructions retired */
	    (unit_mask == 0x01 && event_select == 0x3C) ||	/* unhalted ref cycles */
	    (unit_mask == 0x4F && event_select == 0x2E) ||	/* LLC references */
	    (unit_mask == 0x41 && event_select == 0x2E) ||	/* LLC misses */
	    (!unit_mask && event_select == 0xC4) ||		/* branches retired */
	    (!unit_mask && event_select == 0xC5))		/* branch misses retired */
		return true;

	/* The unimplemented topdown.slots event check is skipped. */
	return false;
}
static unsigned int intel_pmc_perf_hw_id(struct kvm_pmc *pmc)
{
	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
	u8 event_select = pmc->eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
	u8 unit_mask = (pmc->eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
	int i;

	for (i = 0; i < PERF_COUNT_HW_MAX; i++) {
		if (kernel_generic_events[i].eventsel != event_select ||
		    kernel_generic_events[i].unit_mask != unit_mask)
			continue;

		/* Reject events masked off in the guest CPUID 0AH.EBX leaf. */
		if (is_intel_cpuid_event(event_select, unit_mask) &&
		    !test_bit(i, pmu->avail_cpuid_events))
			return PERF_COUNT_HW_MAX + 1;

		break;
	}

	return (i == PERF_COUNT_HW_MAX) ? i : kernel_generic_events[i].event_type;
}
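
For completeness, a hypothetical caller sketch (not code from the
posted series; the type/config handling is assumed) of how a
reprogram path could consume the three return classes:

	unsigned int id = intel_pmc_perf_hw_id(pmc);

	if (id == PERF_COUNT_HW_MAX + 1)
		return;		/* event masked off in guest CPUID */

	if (id < PERF_COUNT_HW_MAX) {
		type = PERF_TYPE_HARDWARE;	/* known generic hw event */
		config = id;
	}
	/* id == PERF_COUNT_HW_MAX: no generic mapping, fall back to raw. */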
> version in v2 of this patch? :)
>
> Paolo
>
>> [1] https://lore.kernel.org/kvm/20211112095139.21775-1-likexu@tencent.com/
>> [V2] https://lore.kernel.org/kvm/20211119064856.77948-1-likexu@tencent.com/
>