Message-ID: <6cabe626-eab3-a973-f279-917a540f2c1f@amd.com>
Date: Sat, 5 Mar 2022 08:59:01 +0530
From: Ravi Bangoria <ravi.bangoria@....com>
To: Like Xu <like.xu.linux@...il.com>,
Jim Mattson <jmattson@...gle.com>
Cc: seanjc@...gle.com, dave.hansen@...ux.intel.com,
peterz@...radead.org, alexander.shishkin@...ux.intel.com,
eranian@...gle.com, daviddunn@...gle.com, ak@...ux.intel.com,
kan.liang@...ux.intel.com, x86@...nel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, kim.phillips@....com,
santosh.shukla@....com,
"Paolo Bonzini - Distinguished Engineer (kernel-recipes.org) (KVM HoF)"
<pbonzini@...hat.com>, Ravi Bangoria <ravi.bangoria@....com>
Subject: Re: [PATCH 3/3] KVM: x86/pmu: Segregate Intel and AMD specific logic
On 04-Mar-22 3:03 PM, Like Xu wrote:
> On 4/3/2022 2:05 am, Jim Mattson wrote:
>> On Thu, Mar 3, 2022 at 8:25 AM Ravi Bangoria <ravi.bangoria@....com> wrote:
>>>
>>>
>>>
>>> On 03-Mar-22 10:08 AM, Jim Mattson wrote:
>>>> On Mon, Feb 21, 2022 at 2:02 AM Ravi Bangoria <ravi.bangoria@....com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 21-Feb-22 1:27 PM, Like Xu wrote:
>>>>>> On 21/2/2022 3:31 pm, Ravi Bangoria wrote:
>>>>>>> void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
>>>>>>> {
>>>>>>>         struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx);
>>>>>>> +       bool is_intel = !strncmp(kvm_x86_ops.name, "kvm_intel", 9);
>>>>>>
>>>>>> How about using guest_cpuid_is_intel(vcpu)
>>>>>
>>>>> Yeah, that's better than strncmp().
>>>>>
>>>>>> directly in the reprogram_gp_counter() ?
>>>>>
>>>>> We need this flag in reprogram_fixed_counter() as well.
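
Just to make that concrete, a rough sketch of what the guest_cpuid_is_intel()
variant could look like (untested; it assumes pmu_to_vcpu() from pmu.h and
guest_cpuid_is_intel() from cpuid.h are both usable in this path):

void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx)
{
        struct kvm_pmc *pmc = kvm_x86_ops.pmu_ops->pmc_idx_to_pmc(pmu, pmc_idx);
        /* Derive the vendor from guest CPUID instead of the host module name. */
        bool is_intel = guest_cpuid_is_intel(pmu_to_vcpu(pmu));
        ...
}

The same is_intel flag could then be passed down to both reprogram_gp_counter()
and reprogram_fixed_counter().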
>>>>
>>>> Explicit "is_intel" checks in any form seem clumsy,
>>>
>>> Indeed. However, introducing an arch-specific callback for such tiny
>>> logic seemed like overkill to me, so I thought I'd just do it this way.
>>
>> I agree that arch-specific callbacks are ridiculous for these distinctions.
>>
>>>> since we have
>>>> already put some effort into abstracting away the implementation
>>>> differences in struct kvm_pmu. It seems like these differences could
>>>> be handled by adding three masks to that structure: the "raw event
>>>> mask" (i.e. event selector and unit mask), the hsw_in_tx mask, and the
>>>> hsw_in_tx_checkpointed mask.
>>>
>>> struct kvm_pmu is arch independent. You mean struct kvm_pmu_ops?
>>
>> No; I meant exactly what I said. See, for example, how the
>> reserved_bits field is used. It is initialized in the vendor-specific
>> pmu_refresh functions, and from then on, it facilitates
>> vendor-specific behaviors without explicit checks or vendor-specific
>> callbacks. An eventsel_mask field would be a natural addition to this
>> structure, to deal with the vendor-specific event selector widths. The
>> hsw_in_tx_mask and hsw_in_tx_checkpointed_mask fields are less
>> natural, since they will be 0 on AMD, but they would make it simple to
>> write the corresponding code in a vendor-neutral fashion.
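
To make that concrete, a rough sketch of the shape this could take (illustrative
only, untested; the field names follow Jim's suggestion rather than any actual
patch, and the mask macros are the existing ones from asm/perf_event.h):

/* struct kvm_pmu (arch/x86/include/asm/kvm_host.h), next to reserved_bits */
        u64 raw_event_mask;                     /* event select + unit mask bits */
        u64 hsw_in_tx_mask;                     /* HSW_IN_TX on Intel, 0 on AMD */
        u64 hsw_in_tx_checkpointed_mask;        /* HSW_IN_TX_CHECKPOINTED on Intel, 0 on AMD */

/* intel_pmu_refresh() */
        pmu->raw_event_mask = ARCH_PERFMON_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK;
        pmu->hsw_in_tx_mask = HSW_IN_TX;
        pmu->hsw_in_tx_checkpointed_mask = HSW_IN_TX_CHECKPOINTED;

/* amd_pmu_refresh(): AMD64_EVENTSEL_EVENT also covers the extended event select bits 35:32 */
        pmu->raw_event_mask = AMD64_EVENTSEL_EVENT | ARCH_PERFMON_EVENTSEL_UMASK;
        pmu->hsw_in_tx_mask = 0;
        pmu->hsw_in_tx_checkpointed_mask = 0;

reprogram_gp_counter() could then mask the event selector with
pmu->raw_event_mask and test the two TX bits through the new fields, with no
explicit vendor check anywhere in the common code.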
>>
>> BTW, am I the only one who finds the HSW_ prefixes a bit absurd here,
>> since TSX was never functional on Haswell?
>
> The TSX story has more twists and turns, but we may start with 3a632cb229bf.
>
>>
>>>>
>>>> These changes should also be coordinated with Like's series that
>>>> eliminates all of the PERF_TYPE_HARDWARE nonsense.
>>>
>>> I'll rebase this on Like's patch series.
>
> I could take over the 3rd patch w/ Co-developed-by and move on if Ravi agrees.
Sure. I don't mind.
Thanks,
Ravi