Message-ID: <c049fdad-14e0-4d03-aa33-9d975374268e@intel.com>
Date: Thu, 31 Oct 2024 08:36:00 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Shivank Garg <shivankg@....com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>
Cc: ardb@...nel.org, bp@...en8.de, brijesh.singh@....com, corbet@....net,
dave.hansen@...ux.intel.com, hpa@...or.com, jan.kiszka@...mens.com,
jgross@...e.com, kbingham@...nel.org, linux-doc@...r.kernel.org,
linux-efi@...r.kernel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
luto@...nel.org, michael.roth@....com, mingo@...hat.com,
peterz@...radead.org, rick.p.edgecombe@...el.com, sandipan.das@....com,
thomas.lendacky@....com, x86@...nel.org
Subject: Re: [PATCH 0/3] x86: Make 5-level paging support unconditional for
x86-64
On 7/31/24 10:45, Shivank Garg wrote:
> It would also be nice to get perf traces. Maybe it is purely a SW issue.
Cycle counts aren't going to help much here. For instance, if 5-level
paging makes *ALL* TLB misses slower, you would just see a regression in
any code that misses the TLB, which could show up all over.
On Intel we have some PMU events like this:
  dtlb_store_misses.walk_active
      [Cycles when at least one PMH is busy
       with a page walk for a store]
(there's a load-side one as well). If a page walk gets more expensive,
you can see it there. Note that this doesn't actually tell you how much
time the core spent _waiting_ for a page walk to complete. If all the
speculation magic works perfectly in your favor, you could have the PMH
(the page-walk hardware) busy 100% of cycles but never have the core
waiting on it.
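For reference, gathering these might look something like the below.
This is just a sketch: the exact event names vary by CPU generation
(check 'perf list' on the machine in question), and "./your_workload"
is a placeholder for whatever reproduces the regression:

  perf stat -e cycles \
            -e dtlb_load_misses.walk_active \
            -e dtlb_store_misses.walk_active \
            -- ./your_workload

Dividing the walk_active counts by total cycles gives a rough upper
bound on how busy the walkers were, with the caveat above that a busy
PMH doesn't necessarily mean a stalled core.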
So could we drill down a level on the "perf traces", please, and gather
some of the relevant performance counters rather than just raw cycle
counts?