Message-ID: <607700F2.9080409@huawei.com>
Date:   Wed, 14 Apr 2021 22:49:22 +0800
From:   Liuxiangdong <liuxiangdong5@...wei.com>
To:     Like Xu <like.xu@...ux.intel.com>
CC:     <andi@...stfloor.org>, "Fangyi (Eric)" <eric.fangyi@...wei.com>,
        Xiexiangyou <xiexiangyou@...wei.com>,
        <kan.liang@...ux.intel.com>, <kvm@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <wei.w.wang@...el.com>,
        <x86@...nel.org>, "Xu, Like" <like.xu@...el.com>
Subject: Re: [PATCH v4 01/16] perf/x86/intel: Add x86_pmu.pebs_vmx for Ice
 Lake Servers

Hi Like,

On 2021/4/9 16:46, Like Xu wrote:
> Hi Liuxiangdong,
>
> On 2021/4/9 16:33, Liuxiangdong (Aven, Cloud Infrastructure Service 
> Product Dept.) wrote:
>> Do you have any comments or ideas about it ?
>>
>> https://lore.kernel.org/kvm/606E5EF6.2060402@huawei.com/
>
> My expectation is that there may be many fewer PEBS samples
> on Skylake without any soft lockup.
>
> You may need to confirm the statement
>
> "All that matters is that the EPT pages don't get
> unmapped ever while PEBS is active"
>
> is true at the kernel level.
>
> Try "-overcommit mem-lock=on" for your qemu.
>

Sorry, I don't quite understand what you mean by
"My expectation is that there may be many fewer PEBS samples on Skylake
without any soft lockup."

Also, I had already used "-overcommit mem-lock=on" when the soft
lockup happened.


Now I have tried configuring 1G hugepages for a VM with 2G of memory,
so that each guest NUMA node has 1G of memory backed by one hugepage.
When I use PEBS (perf record -e cycles:pp) in the guest, I get valid
PEBS samples only for a short while, and then no more samples arrive.
The host does not soft lockup during this test.
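
For reference, the setup is roughly as follows (the exact qemu options
here are illustrative, not copied from my scripts). The host boots with
1G hugepages reserved on the kernel command line:

    default_hugepagesz=1G hugepagesz=1G hugepages=2

and qemu backs each guest NUMA node with one of them, with guest memory
locked as you suggested:

    qemu-system-x86_64 ... -m 2G -overcommit mem-lock=on \
        -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on,prealloc=on \
        -object memory-backend-file,id=mem1,size=1G,mem-path=/dev/hugepages,share=on,prealloc=on \
        -numa node,nodeid=0,memdev=mem0 \
        -numa node,nodeid=1,memdev=mem1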

Is there something wrong on Skylake that makes us get only a few
samples?  The PEBS IRQ?  Or is using hugepages not effective?
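
To rule out the hugepage side, the host's hugepage accounting can be
checked with, e.g.:

    $ grep -i huge /proc/meminfo

HugePages_Free there should drop by two once qemu has preallocated
both 1G pages.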

Thanks!

>>
>>
>> On 2021/4/6 13:14, Xu, Like wrote:
>>> Hi Xiangdong,
>>>
>>> On 2021/4/6 11:24, Liuxiangdong (Aven, Cloud Infrastructure Service 
>>> Product Dept.) wrote:
>>>> Hi, Like.
>>>> Some questions about this new PEBS patch set:
>>>> https://lore.kernel.org/kvm/20210329054137.120994-2-like.xu@linux.intel.com/ 
>>>>
>>>>
>>>> The new hardware facility supporting guest PEBS is only available
>>>> on Intel Ice Lake Server platforms for now.
>>>
>>> Yes, we have documented this "EPT-friendly PEBS" capability in the SDM
>>> 18.3.10.1 Processor Event Based Sampling (PEBS) Facility
>>>
>>> And again, this patch set doesn't officially support guest PEBS
>>> on Skylake.
>>>
>>>>
>>>>
>>>> AFAIK, Ice Lake supports adaptive PEBS and extended PEBS, which
>>>> Skylake doesn't.
>>>> But we can still use the IA32_PEBS_ENABLE MSR to enable PEBS on the
>>>> general-purpose counters on Skylake.
>>>
>>> For Skylake, only PMC0-PMC3 are valid for PEBS, and you may
>>> mask the other unsupported bits in the pmu->pebs_enable_mask.
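
If I follow, a minimal sketch of that masking, reusing the
pmu->pebs_enable_mask field from this series (and assuming, as an
illustration, that it holds the reserved/must-be-zero bits of
IA32_PEBS_ENABLE), might look like:

    /*
     * Hypothetical sketch, not from this series: a Skylake guest may
     * enable PEBS only on PMC0-PMC3, so mark every other bit of
     * IA32_PEBS_ENABLE (including the fixed-counter bits, since
     * Skylake has no extended PEBS) as reserved.
     */
    static void skx_limit_guest_pebs(struct kvm_pmu *pmu)
    {
            /* Bits 0-3 of IA32_PEBS_ENABLE select PMC0-PMC3. */
            u64 valid = GENMASK_ULL(3, 0);

            pmu->pebs_enable_mask = ~valid;
    }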
>>>
>>>> Is there anything else in this patch set that only Ice Lake supports?
>>>
>>> The PDIR counter on Ice Lake is the fixed counter 0,
>>> while the PDIR counter on Skylake is the GP counter 1.
>>>
>>> You may also expose x86_pmu.pebs_vmx for Skylake in the 1st patch.
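
To make sure I understand: for Skylake that would be something like
the following in intel_pmu_init() in arch/x86/events/intel/core.c,
mirroring what patch 1 does for Ice Lake (the exact case label below
is my guess):

    case INTEL_FAM6_SKYLAKE_X:
            ...
            /* Hypothetical: also advertise PEBS-in-VMX on Skylake. */
            x86_pmu.pebs_vmx = 1;
            break;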
>>>
>>>>
>>>>
>>>> Besides, we have tried this patch set on Ice Lake.  We can use
>>>> PEBS (e.g. "perf record -e cycles:pp") when the guest runs kernel
>>>> 5.11, but not when it runs kernel 4.18.  Is there a minimum guest
>>>> kernel version requirement?
>>>
>>> The Ice Lake CPU model was added in v5.4.
>>>
>>> You may double check whether the stable tree(s) code has
>>> INTEL_FAM6_ICELAKE in the arch/x86/include/asm/intel-family.h.
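
For anyone checking the same thing, in the guest kernel source tree
that is just:

    $ grep INTEL_FAM6_ICELAKE arch/x86/include/asm/intel-family.h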
>>>
>>>>
>>>>
>>>> Thanks,
>>>> Xiangdong Liu
>>>
>>
>
