Date:   Tue, 8 Jan 2019 09:08:55 -0500
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Wei Wang <wei.w.wang@...el.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, pbonzini@...hat.com, ak@...ux.intel.com,
        peterz@...radead.org
Cc:     kan.liang@...el.com, mingo@...hat.com, rkrcmar@...hat.com,
        like.xu@...el.com, jannh@...gle.com, arei.gonglei@...wei.com
Subject: Re: [PATCH v4 04/10] KVM/x86: intel_pmu_lbr_enable



On 1/8/2019 1:13 AM, Wei Wang wrote:
> On 01/07/2019 10:22 PM, Liang, Kan wrote:
>>
>>> Thanks for sharing. I understand the point of maintaining those
>>> models in one place, but this factoring-out doesn't seem very
>>> elegant to me; see below:
>>>
>>> __intel_pmu_init(int model, struct x86_pmu *x86_pmu)
>>> {
>>> ...
>>> switch (model) {
>>> case INTEL_FAM6_NEHALEM:
>>> case INTEL_FAM6_NEHALEM_EP:
>>> case INTEL_FAM6_NEHALEM_EX:
>>>      intel_pmu_lbr_init(x86_pmu);
>>>      if (model != boot_cpu_data.x86_model)
>>>          return;
>>>
>>>      /* a lot of other init work, like below */
>>>      memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids,
>>>             sizeof(hw_cache_event_ids));
>>>      memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs,
>>>             sizeof(hw_cache_extra_regs));
>>>      x86_pmu->event_constraints = intel_nehalem_event_constraints;
>>>      x86_pmu->pebs_constraints = intel_nehalem_pebs_event_constraints;
>>>      x86_pmu->enable_all = intel_pmu_nhm_enable_all;
>>>      x86_pmu->extra_regs = intel_nehalem_extra_regs;
>>>      ...
>>>
>>> case ...:
>>> }
>>> }
>>> We would need to insert "if (model != boot_cpu_data.x86_model)" in
>>> every "case xx".
>>>
>>> What would be the rationale for doing only the lbr_init part for
>>> "x86_pmu" when model != boot_cpu_data.x86_model?
>>> (It looks more like a workaround to factor out the function and get
>>> what we want.)
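[For reference, a rough sketch of what the parameterized per-model helper
could look like, mirroring the existing intel_pmu_lbr_init_nhm() in
arch/x86/events/intel/lbr.c; taking the pmu as an argument instead of
writing the global x86_pmu is an assumption of this sketch, not
necessarily what the patch does:]

/* Sketch only: Nehalem LBR setup applied to the pmu passed in, so it
 * can target either the real host x86_pmu or a fake one. */
static void intel_pmu_lbr_init_nhm(struct x86_pmu *pmu)
{
	pmu->lbr_nr   = 16;                 /* 16 LBR entries on NHM */
	pmu->lbr_tos  = MSR_LBR_TOS;        /* top-of-stack MSR */
	pmu->lbr_from = MSR_LBR_NHM_FROM;   /* first FROM_IP MSR */
	pmu->lbr_to   = MSR_LBR_NHM_TO;     /* first TO_IP MSR */
}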
>>
>> I thought the new function could be extended to support a fake pmu, as
>> below. It's not only for LBR: the PMU has many CPU-specific features,
>> and this could be used for other features if you want to check
>> compatibility in the future. But I don't have an example now.
>>
>> __intel_pmu_init (int model, struct x86_pmu *x86_pmu)
>> {
>> bool fake_pmu = (model != boot_cpu_data.x86_model);
>> ...
>> switch (model) {
>> case INTEL_FAM6_NEHALEM:
>> case INTEL_FAM6_NEHALEM_EP:
>> case INTEL_FAM6_NEHALEM_EX:
>>      intel_pmu_lbr_init(x86_pmu);
>>      x86_pmu->event_constraints = intel_nehalem_event_constraints;
>>      x86_pmu->pebs_constraints = intel_nehalem_pebs_event_constraints;
>>      x86_pmu->enable_all = intel_pmu_nhm_enable_all;
>>      x86_pmu->extra_regs = intel_nehalem_extra_regs;
>>
>>      if (fake_pmu)
>>          return;
> 
> It looks similar to the one I shared above; the difference is that more
> things (e.g. constraints) are assigned to x86_fake_pmu.
> I'm not sure about the logic behind it (it still looks like a
> workaround).

The fake x86_pmu will include all the features supported on the host. If
you want to check other features in the future, it would be useful.

> 
> 
> 
>>
>>      /* Global variables should not be updated for fake PMU */
>>      memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids,
>>                     sizeof(hw_cache_event_ids));
>>      memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs,
>>                     sizeof(hw_cache_extra_regs));
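
[A minimal sketch of how a caller such as KVM might then use the fake-pmu
path to probe a guest model's LBR support; the helper name
guest_model_has_lbr() and the lbr_nr check are illustrative assumptions,
not part of the patch:]

/* Hypothetical caller: probe LBR support for an arbitrary CPU model
 * via a throwaway struct x86_pmu.  Because guest_model differs from
 * boot_cpu_data.x86_model, __intel_pmu_init() returns before touching
 * any host-global state (hw_cache_event_ids etc.). */
static bool guest_model_has_lbr(int guest_model)
{
	struct x86_pmu fake_pmu = {};

	__intel_pmu_init(guest_model, &fake_pmu);

	return fake_pmu.lbr_nr > 0;
}

[The same probe could in principle compare other fields, e.g.
fake_pmu.pebs_constraints, against the host's x86_pmu when checking
features beyond LBR.]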
>>
>>
>>>
>>> I would prefer having them separate, as in this patch, for now - it
>>> is logically clearer to me.
>>>
>>
>> But it will be a maintenance problem. Perf developers will probably
>> forget to update the list in KVM, so I think you will have to check
>> the perf code regularly.
>>
> 
> That's been very common in hypervisor development; it's why we have
> hypervisor developers here.
> When a new platform is added, we will definitely have some work like
> this to do.
>

If that's part of your job, I'm OK with it.

Thanks,
Kan
