Message-ID: <6076F7E7.5080200@huawei.com>
Date: Wed, 14 Apr 2021 22:10:47 +0800
From: Liuxiangdong <liuxiangdong5@...wei.com>
To: Andi Kleen <andi@...stfloor.org>
CC: Like Xu <like.xu@...ux.intel.com>,
"Fangyi (Eric)" <eric.fangyi@...wei.com>,
Xiexiangyou <xiexiangyou@...wei.com>,
<kan.liang@...ux.intel.com>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <wei.w.wang@...el.com>,
<x86@...nel.org>, "Xu, Like" <like.xu@...el.com>
Subject: Re: [PATCH v4 01/16] perf/x86/intel: Add x86_pmu.pebs_vmx for Ice Lake Servers
On 2021/4/12 23:25, Andi Kleen wrote:
>> The reason the soft lockup happens may be the unmapped EPT pages. So, do we
>> have a way to map all GPAs before we use PEBS on Skylake?
> Can you configure a VT-d device? That will implicitly pin all pages for the
> IOMMU. I *think* that should be enough for testing.
>
> -Andi
Thanks!
But it doesn't seem to work: the host still soft-lockups when I
configure an SR-IOV passthrough network card for the VM.
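For reference, the device assignment can be sketched roughly as below. This is a hedged example, not the exact setup I used; the PCI address 0000:03:10.0 stands in for the actual SR-IOV VF, and the QEMU command line is trimmed to the relevant options. VFIO pins all guest memory for the IOMMU as a side effect of the assignment, which is the property Andi suggested relying on.

```shell
# Placeholder VF address; substitute the real SR-IOV VF here.
VF=0000:03:10.0

# Unbind the VF from its host driver and hand it to vfio-pci.
echo "$VF" > /sys/bus/pci/devices/$VF/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/$VF/driver_override
echo "$VF" > /sys/bus/pci/drivers/vfio-pci/bind

# Start the guest with the device assigned; vfio-pci pins all
# guest pages for DMA, so no EPT page should remain unmapped.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -device vfio-pci,host=$VF \
    ...
```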
Besides, I have tried configuring 1G hugepages for a 2G-memory VM, with
each guest NUMA node getting 1G of memory.
When I use PEBS (perf record -e cycles:pp) in the guest, I get
successful PEBS samples on Skylake only for a short while, and
then no more PEBS samples arrive. The host does not soft-lockup in this case.
Is this method effective? Is something wrong on Skylake such that we
can only get a few samples? Could it be IRQ-related?