Message-ID: <15a29679-ea0b-d1c8-1a1f-698d3db35293@intel.com>
Date: Thu, 28 Jan 2021 08:41:26 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Zi Yan <ziy@...dia.com>, Saravanan D <saravanand@...com>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>
Cc: x86@...nel.org, dave.hansen@...ux.intel.com, luto@...nel.org,
peterz@...radead.org, corbet@....net, willy@...radead.org,
linux-kernel@...r.kernel.org, kernel-team@...com,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
songliubraving@...com
Subject: Re: [PATCH V5] x86/mm: Tracking linear mapping split events

On 1/28/21 8:33 AM, Zi Yan wrote:
>> One of the many lasting sources of huge page splits (lasting because
>> the mappings are never coalesced back) is tracing: its fine-grained
>> page attribute/permission changes force the kernel to split code
>> segments mapped with huge pages into smaller pages, increasing the
>> probability of TLB misses/reloads even after tracing has stopped.
> It is interesting to see this statement claim that splitting kernel
> direct mappings causes a performance loss, when Zhengjun (cc’d) from
> Intel recently posted a kernel direct mapping performance report[1]
> which found that 1GB mappings are good, but not much better than 2MB
> and 4KB mappings.

No, that's not what the report said.

*Overall*, there is no clear winner between 4k, 2M and 1G. In other
words, no one page size is best for *ALL* workloads.

There were *ABSOLUTELY* individual workloads in those tests that saw
significant deltas between the direct map sizes. There are also
real-world workloads that feel the impact here.
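
As an aside, the splits are already visible from userspace even without
this patch: x86 breaks out the direct map by page size in /proc/meminfo
(DirectMap4k / DirectMap2M / DirectMap1G). A minimal sketch, assuming
Python on such a kernel, that snapshots those counters so the drift from
huge pages to 4k pages can be watched across a tracing run:

# Sketch: snapshot the x86 direct-map page-size breakdown from
# /proc/meminfo. Splits show up as DirectMap4k growing at the
# expense of DirectMap2M / DirectMap1G.

def direct_map_counters(path="/proc/meminfo"):
    counters = {}
    with open(path) as f:
        for line in f:
            if line.startswith("DirectMap"):
                name, value = line.split(":", 1)
                counters[name] = int(value.split()[0])  # reported in kB
    return counters

if __name__ == "__main__":
    for name, kb in sorted(direct_map_counters().items()):
        print(f"{name:12s} {kb} kB")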
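And for the event counters themselves (assuming the names proposed in
this series, direct_map_level2_splits and direct_map_level3_splits in
/proc/vmstat), a before/after delta around a workload is enough to see
new splits:

# Sketch, assuming the /proc/vmstat counter names proposed in this
# series (direct_map_level2_splits, direct_map_level3_splits); they
# are not in mainline as of this thread.

def split_events(path="/proc/vmstat"):
    events = {}
    with open(path) as f:
        for line in f:
            if line.startswith("direct_map_level"):
                name, value = line.split()
                events[name] = int(value)
    return events

if __name__ == "__main__":
    before = split_events()
    input("run the workload (e.g. start and stop tracing), then hit Enter ")
    after = split_events()
    for name in sorted(after):
        print(f"{name}: +{after[name] - before.get(name, 0)}")

Unlike the DirectMap* breakdown, these counters only ever increase,
which matches the point above: the splits are never coalesced back.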