Message-ID: <9e73d6af-47a8-43bf-8ffa-9525bc8c747b@redhat.com>
Date: Tue, 31 Oct 2023 09:46:55 +0100
From: David Hildenbrand <david@...hat.com>
To: Byungchul Park <byungchul@...com>,
Dave Hansen <dave.hansen@...el.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel_team@...ynix.com, akpm@...ux-foundation.org,
ying.huang@...el.com, namit@...are.com, xhao@...ux.alibaba.com,
mgorman@...hsingularity.net, hughd@...gle.com, willy@...radead.org,
peterz@...radead.org, luto@...nel.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com
Subject: Re: [v3 0/3] Reduce TLB flushes under some specific conditions
On 30.10.23 23:55, Byungchul Park wrote:
> On Mon, Oct 30, 2023 at 10:55:07AM -0700, Dave Hansen wrote:
>> On 10/30/23 00:25, Byungchul Park wrote:
>>> I'm suggesting a mechanism to reduce TLB flushes by keeping both the
>>> source and destination folios of a migration around until all the
>>> required TLB flushes have been done, but only if none of those folios
>>> are mapped by writable PTE entries. The work is based on v6.6-rc5.
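If I read the idea correctly, it boils down to something like the
following (a rough sketch with made-up helper names, not the actual
patch):

	static LIST_HEAD(deferred_srcs);

	/* At the end of migrating src -> dst: */
	if (src_mapped_by_writable_ptes(src)) {		/* hypothetical */
		/* Writable PTEs: flush before the folio can be reused. */
		flush_tlb_now();			/* hypothetical */
		folio_put(src);
	} else {
		/*
		 * Read-only mappings may keep reading the stale src
		 * safely, so both the flush and the free are deferred.
		 */
		list_add(&src->lru, &deferred_srcs);
	}

	/* Later, a single batched flush releases everything at once: */
	flush_tlb_all();
	while (!list_empty(&deferred_srcs)) {
		struct folio *f = list_first_entry(&deferred_srcs,
						   struct folio, lru);
		list_del(&f->lru);
		folio_put(f);
	}

Is that roughly the mechanism?
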
>>
>> There's a lot of unconditional overhead here, on top of the general complexity:
>>
>> * A new page flag
>> * A new cpumask_t in task_struct
>> * A new zone list
>> * Extra (temporary) memory consumption
>>
>> and the benefits are ... "performance improved a little bit" on one
>> workload. That doesn't seem like a good overall tradeoff to me.
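
To make the overhead items above concrete, the additions presumably
look roughly like this (again a sketch with invented names, not the
actual diff):

	/* include/linux/page-flags.h: consumes a scarce flag bit */
	PG_migration_deferred,			/* hypothetical */

	/* include/linux/sched.h: grows every task_struct */
	struct task_struct {
		...
		cpumask_t tlb_flush_pending_mask;	/* hypothetical */
	};

	/* mm/: a new per-zone list holding deferred source folios */
	struct list_head deferred_src_folios;

The cpumask alone grows every task_struct in the system, whether or
not the optimization ever triggers.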
>>
>> There will certainly be workloads that, before this patch, would have
>> little or no memory pressure and after this patch would need to do reclaim.
>
> Whether 'gain - cost > 0' holds is a difficult question. I think the
> following are already big benefits in general:
>
> 1. a big reduction in the number of IPIs
> 2. a big reduction in the number of TLB flushes
> 3. a big reduction in the number of TLB misses
>
> Of course, I (or we) need to keep working to show better numbers in
> end-to-end performance.
You'll have to show convincing, real numbers for use cases people care
about to even motivate why people should consider looking at this in
more detail.
If you can't measure it and only speculate, nobody cares.
The numbers you provided so far were not convincing, and it's
questionable whether the single benchmark you are presenting is
representative enough that the gains would carry over to *real*
workloads. A better description of the whole benchmark and of why it
represents real workload behavior might help.
--
Cheers,
David / dhildenb