Message-ID: <Z7nDJQanWxv5cC8d@gmail.com>
Date: Sat, 22 Feb 2025 13:29:25 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Manali Shukla <manali.shukla@....com>
Cc: Rik van Riel <riel@...riel.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org, bp@...en8.de, peterz@...radead.org,
	dave.hansen@...ux.intel.com, zhengqi.arch@...edance.com,
	nadav.amit@...il.com, thomas.lendacky@....com, kernel-team@...a.com,
	linux-mm@...ck.org, akpm@...ux-foundation.org, jannh@...gle.com,
	mhklinux@...look.com, andrew.cooper3@...rix.com
Subject: Re: [PATCH v7 00/12] AMD broadcast TLB invalidation


* Manali Shukla <manali.shukla@....com> wrote:

> On 1/23/2025 9:53 AM, Rik van Riel wrote:
> > Add support for broadcast TLB invalidation using AMD's INVLPGB instruction.
> > 
> > This allows the kernel to invalidate TLB entries on remote CPUs without
> > needing to send IPIs, without having to wait for remote CPUs to handle
> > those interrupts, and with less interruption to what was running on
> > those CPUs.
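> > 
> > Roughly, the low-level wrappers look like this (a simplified sketch,
> > with the operand layout per the AMD APM: rAX holds the virtual
> > address plus valid/flag bits, ECX the extra-page count and stride,
> > EDX the PCID and ASID; byte-encoded so older toolchains can assemble
> > it; not necessarily the exact helpers in these patches):
> > 
> > static inline void __invlpgb(unsigned long asid, unsigned long pcid,
> > 			     unsigned long addr, u16 nr_extra_pages,
> > 			     bool pmd_stride, u8 flags)
> > {
> > 	u32 edx = (pcid << 16) | asid;
> > 	u32 ecx = ((u32)pmd_stride << 31) | nr_extra_pages;
> > 	u64 rax = addr | flags;
> > 
> > 	/* INVLPGB: broadcast-invalidate the described TLB entries. */
> > 	asm volatile(".byte 0x0f, 0x01, 0xfe"
> > 		     : : "a" (rax), "c" (ecx), "d" (edx));
> > }
> > 
> > /* TLBSYNC: wait until all INVLPGBs issued by this CPU have completed. */
> > static inline void __tlbsync(void)
> > {
> > 	asm volatile(".byte 0x0f, 0x01, 0xff" ::: "memory");
> > }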
> > 
> > Because x86 PCID space is limited, and there are some very large
> > systems out there, broadcast TLB invalidation is only used for
> > processes that are active on 3 or more CPUs, with the threshold
> > increasing gradually as the PCID space fills up.
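> > 
> > As a simplified sketch of that policy (helper names and constants
> > here are illustrative, not the exact code from the series):
> > 
> > static bool mm_wants_global_asid(struct mm_struct *mm)
> > {
> > 	/* The process must be active on at least 3 CPUs. */
> > 	int threshold = 3;
> > 
> > 	/* Raise the bar as free global ASIDs become scarce. */
> > 	threshold += (global_asid_used * 8) / global_asid_total;
> > 
> > 	return mm_active_cpus(mm) >= threshold;
> > }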
> > 
> > Combined with the removal of unnecessary lru_add_drain calls
> > (see https://lkml.org/lkml/2024/12/19/1388) this results in a
> > nice performance boost for the will-it-scale tlb_flush2_threads
> > test on an AMD Milan system with 36 cores:
> > 
> > - vanilla kernel:           527k loops/second
> > - lru_add_drain removal:    731k loops/second
> > - only INVLPGB:             527k loops/second
> > - lru_add_drain + INVLPGB: 1157k loops/second
> > 
> > Profiling with only the INVLPGB changes showed that while
> > TLB invalidation went down from 40% of total CPU time to
> > only around 4%, the contention simply moved to the LRU lock.
> > 
> > Fixing both at the same time roughly doubles the number of
> > iterations per second for this case.
> > 
> > Some numbers closer to real-world performance
> > can be found at Phoronix, thanks to Michael:
> > 
> > https://www.phoronix.com/news/AMD-INVLPGB-Linux-Benefits
> > 
> > My current plan is to implement support for Intel's RAR
> > (Remote Action Request) TLB flushing in a follow-up series,
> > after this thing has been merged into -tip. Making things
> > any larger would just be unwieldy for reviewers.
> > 
> > v7:
> >  - a few small code cleanups (Nadav)
> >  - fix spurious VM_WARN_ON_ONCE in mm_global_asid
> >  - code simplifications & better barriers (Peter & Dave)
> > v6:
> >  - fix info->end check in flush_tlb_kernel_range (Michael)
> >  - disable broadcast TLB flushing on 32 bit x86
> > v5:
> >  - use byte assembly for compatibility with older toolchains (Borislav, Michael)
> >  - ensure a panic on an invalid number of extra pages (Dave, Tom)
> >  - add cant_migrate() assertion to tlbsync (Jann)
> >  - a bunch more cleanups (Nadav)
> >  - key TCE enabling off X86_FEATURE_TCE (Andrew)
> >  - fix a race between reclaim and ASID transition (Jann)
> > v4:
> >  - Use only bitmaps to track free global ASIDs (Nadav)
> >  - Improved AMD initialization (Borislav & Tom)
> >  - Various naming and documentation improvements (Peter, Nadav, Tom, Dave)
> >  - Fixes for subtle race conditions (Jann)
> > v3:
> >  - Remove paravirt tlb_remove_table call (thank you Qi Zheng)
> >  - More suggested cleanups and changelog fixes by Peter and Nadav
> > v2:
> >  - Apply suggestions by Peter and Borislav (thank you!)
> > - Fix bug in arch_tlbbatch_flush, where we need to both issue
> >    the TLBSYNC and flush the CPUs that are in the cpumask (see
> >    the sketch after this list).
> >  - Some updates to comments and changelogs based on questions.
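> > 
> > The shape of that arch_tlbbatch_flush fix, roughly (simplified;
> > helper names are illustrative, not the exact code from the patch):
> > 
> > void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > {
> > 	/*
> > 	 * Wait for the INVLPGBs issued while unmapping pages to
> > 	 * complete on all CPUs.
> > 	 */
> > 	if (batch->used_invlpgb) {
> > 		tlbsync();
> > 		batch->used_invlpgb = false;
> > 	}
> > 
> > 	/*
> > 	 * Broadcast invalidation only covered mms with a global
> > 	 * ASID; CPUs accumulated in the cpumask still need the
> > 	 * IPI-based flush.
> > 	 */
> > 	if (!cpumask_empty(&batch->cpumask))
> > 		flush_tlb_cpumask(&batch->cpumask);	/* illustrative */
> > 
> > 	cpumask_clear(&batch->cpumask);
> > }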
> > 
> > 
> 
> I have collected performance data using the will-it-scale
> tlb_flush2_threads benchmark on my AMD Milan, Genoa, and Turin systems.
> 
> As expected, I don't see any discrepancies in the data.
> (Performance testing was done on top of 6.13.0-rc7; the improvement
> percentages below are computed as (variant - Vanilla) / variant.)
> 
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | ./tlb_flush2_threads -s 5 -t 128 | Milan 1P (NPS1) | Milan 1P (NPS2) | Genoa 1P (NPS1) | Genoa 1P (NPS2) | Turin 2P (NPS1) | Turin 2P (NPS2) |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | Vanilla                          |      357647     |      419631     |     319885      |      311069     |      380559     |      379286     |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | LRU drain removal                |      784734     |      796056     |     540862      |      530472     |      549168     |      482683     |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | INVLPGB                          |      581069     |      950848     |     501033      |      553987     |      528660     |      536535     |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | LRU drain removal + INVLPGB      |     1094941     |     1086826     |     980293      |      979005     |      1228823    |      1238440    |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | LRU drain vs. Vanilla            |      54.42%     |     47.29%      |     40.86%      |      41.36%     |      30.70%     |      21.42%     |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | INVLPGB vs. Vanilla              |      38.45%     |     55.87%      |     36.16%      |      43.85%     |      28.01%     |      29.31%     |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> | (LRU drain + INVLPGB) vs. Vanilla|      67.34%     |     61.39%      |     67.37%      |      68.23%     |      69.03%     |      69.37%     |
> ------------------------------------------------------------------------------------------------------------------------------------------------
> 
> Feel free to add:
> Tested-by: Manali Shukla <Manali.Shukla@....com>

Great data!

Could we please add all the scalability testing results to patch #9 or 
so, so that they are preserved in the kernel Git history and provide 
background on why we want this feature?

Thanks,

	Ingo
