Message-ID: <426011a9-1fbc-415c-bac7-df5d67417df3@intel.com>
Date: Thu, 9 Jan 2025 13:18:57 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Rik van Riel <riel@...riel.com>, x86@...nel.org
Cc: linux-kernel@...r.kernel.org, kernel-team@...a.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com,
akpm@...ux-foundation.org, nadav.amit@...il.com, zhengqi.arch@...edance.com,
linux-mm@...ck.org
Subject: Re: [PATCH 06/12] x86/mm: use INVLPGB for kernel TLB flushes
On 1/9/25 12:16, Rik van Riel wrote:
> On Mon, 2025-01-06 at 09:21 -0800, Dave Hansen wrote:
>> On 12/30/24 09:53, Rik van Riel wrote:
>>
>>
>>> +static void broadcast_kernel_range_flush(unsigned long start, unsigned long end)
>>> +{
>>> +	unsigned long addr;
>>> +	unsigned long maxnr = invlpgb_count_max;
>>> +	unsigned long threshold = tlb_single_page_flush_ceiling * maxnr;
>>
>> The 'tlb_single_page_flush_ceiling' value was determined by
>> looking at _local_ invalidation cost. Could you talk a bit about
>> why it's also a good value to use for remote invalidations? Does it
>> hold up for INVLPGB the same way it did for good ol' INVLPG? Has
>> there been any explicit testing here to find a good value?
>>
>> I'm also confused by the multiplication here. Let's say
>> invlpgb_count_max==20 and tlb_single_page_flush_ceiling==30.
>>
>> You would need to switch away from single-address invalidation
>> when the number of addresses is >20 for INVLPGB functional reasons.
>> But you'd also need to switch away when >30 for performance
>> reasons (tlb_single_page_flush_ceiling).
>>
>> But I don't understand how that would make the threshold 20*30=600
>> invalidations.
>
> I have not done any measurement to see how
> flushing with INVLPGB stacks up versus
> local TLB flushes.
>
> What makes INVLPGB potentially slower:
> - These flushes are done globally
>
> What makes INVLPGB potentially faster:
> - Multiple flushes can be pending simultaneously,
> and executed in any convenient order by the CPUs.
> - Wait once on completion of all the queued flushes.
>
> Another thing that makes things interesting is the
> TLB entry coalescing done by AMD CPUs.
>
> When multiple pages are both virtually and physically
> contiguous in memory (which is fairly common), the
> CPU can use a single TLB entry to map up to 8 of them.
>
> That means if we issue eg. 20 INVLPGB flushes for
> 8 4kB pages each, instead of the CPUs needing to
> remove 160 TLB entries, there might only be 50.
I honestly don't expect there to be any real difference in INVLPGB
execution on the sender side based on what the receivers have in their TLB.
> I just guessed at the numbers used in my code,
> while trying to sort out the details elsewhere
> in the code.
>
> How should we go about measuring the tradeoffs
> between invalidation time, and the time spent
> in TLB misses from flushing unnecessary stuff?
Well, we did a bunch of benchmarks for INVLPG. We could dig that back up
and repeat some of it.
But actually I think INVLPGB is *WAY* better than INVLPG here. INVLPG
doesn't have ranged invalidation. It will only architecturally
invalidate multiple 4K entries when the hardware fractured them in the
first place. I think we should probably take advantage of what INVLPGB
can do instead of following the INVLPG approach.
INVLPGB will invalidate a range no matter where the underlying entries
came from. Its "increment the virtual address at the 2M boundary" mode
will invalidate entries of any size. That's my reading of the docs at
least. Is that everyone else's reading too?
So, let's pick a number "Z" which is >= invlpgb_count_max. Z could
arguably be set to tlb_single_page_flush_ceiling. Then do this:
4k -> Z*4k => use 4k step
>Z*4k -> Z*2M => use 2M step
>Z*2M => invalidate everything
Invalidations <=Z*4k are exact. They never zap extra TLB entries.
Invalidations that use the 2M step *might* unnecessarily zap some extra
4k mappings in the last 2M, but this is *WAY* better than invalidating
everything.
"Invalidate everything" obviously stinks, but it should only be for
pretty darn big invalidations. This approach can also do a true ranged
INVLPGB for many more cases than the existing proposal. The only issue
would be if the 2M step is substantially more expensive than the 4k step.
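To make that concrete, here's a very rough, untested sketch of the
tiering. The invlpgb_flush_range_nosync()/invlpgb_flush_all()/tlbsync()
helpers are just placeholders for whatever this series ends up
providing (the last argument is meant to pick the 2M-boundary stride
mode):

static void broadcast_kernel_range_flush(unsigned long start, unsigned long end)
{
	unsigned long z = max_t(unsigned long, invlpgb_count_max,
				tlb_single_page_flush_ceiling);
	unsigned long nr_4k = (end - start) >> PAGE_SHIFT;
	unsigned long stride = PAGE_SIZE;
	bool pmd_stride = false;
	unsigned long addr;

	if (nr_4k > z * (PMD_SIZE >> PAGE_SHIFT)) {
		/* Too big even for the 2M step: give up and flush it all. */
		invlpgb_flush_all();
		return;
	}

	if (nr_4k > z) {
		/*
		 * 2M step: may zap some extra 4k entries at the edges,
		 * but that's still way better than a full flush.
		 */
		stride = PMD_SIZE;
		pmd_stride = true;
		start = round_down(start, PMD_SIZE);
		end = round_up(end, PMD_SIZE);
	}

	/* Each INVLPGB can cover up to invlpgb_count_max entries. */
	for (addr = start; addr < end; addr += invlpgb_count_max * stride) {
		unsigned long nr = min_t(unsigned long,
					 (end - addr) / stride,
					 invlpgb_count_max);

		invlpgb_flush_range_nosync(addr, nr, pmd_stride);
	}

	/* Wait once for all of the queued flushes to complete. */
	tlbsync();
}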
...
>> I also wonder if this would all get simpler if we give in and
>> *always* call get_flush_tlb_info(). That would provide a nice
>> single place to consolidate the "all vs. ranged" flush logic.
>
> Possibly. That might be a good way to unify that threshold check?
>
> That should probably be a separate patch, though.
Yes, it should be part of refactoring that comes before the INVLPGB
enabling.
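Roughly what I'm picturing (from memory, so the get_flush_tlb_info()
argument list is probably not exact, and the INVLPGB feature check is a
stand-in for whatever this series adds):

void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	struct flush_tlb_info *info;

	preempt_disable();
	/*
	 * get_flush_tlb_info() already applies the
	 * tlb_single_page_flush_ceiling "all vs. ranged" check, so the
	 * open-coded threshold here goes away.  It would need to learn
	 * about the bigger INVLPGB thresholds discussed above.
	 */
	info = get_flush_tlb_info(NULL, start, end, PAGE_SHIFT, false,
				  TLB_GENERATION_INVALID);

	if (info->end == TLB_FLUSH_ALL)
		flush_tlb_all();
	else if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
		broadcast_kernel_range_flush(info->start, info->end);
	else
		on_each_cpu(do_kernel_range_flush, info, 1);

	put_flush_tlb_info();
	preempt_enable();
}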