Open Source and information security mailing list archives
Message-ID: <CAJD7tkZkSB_95t2PMF1Qb4+jmCzLV7CvE6LL1YQqHwzCSsznwg@mail.gmail.com>
Date: Mon, 6 Jan 2025 14:49:02 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Rik van Riel <riel@...riel.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org, kernel-team@...a.com, 
	dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org, 
	tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com, 
	akpm@...ux-foundation.org, nadav.amit@...il.com, zhengqi.arch@...edance.com, 
	linux-mm@...ck.org, Reiji Watanabe <reijiw@...gle.com>, 
	Brendan Jackman <jackmanb@...gle.com>
Subject: Re: [PATCH v3 00/12] AMD broadcast TLB invalidation

On Mon, Dec 30, 2024 at 9:57 AM Rik van Riel <riel@...riel.com> wrote:
>
> Subject: [RFC PATCH 00/10] AMD broadcast TLB invalidation
>
> Add support for broadcast TLB invalidation using AMD's INVLPGB instruction.
>
> This allows the kernel to invalidate TLB entries on remote CPUs without
> needing to send IPIs, without having to wait for remote CPUs to handle
> those interrupts, and with less interruption to what was running on
> those CPUs.
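As a rough illustration of how a kernel might drive INVLPGB, here is a minimal sketch of the operand packing the AMD APM describes (rAX carries the page-aligned virtual address plus flag bits, ECX the PMD-stride bit and extra page count, EDX the PCID/ASID pair). All helper names here are hypothetical, not the series' actual API:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of INVLPGB operand packing, per the AMD APM:
 * rAX = page-aligned virtual address OR'ed with flag bits,
 * ECX = PMD-stride bit (bit 31) and extra page count (low bits),
 * EDX = PCID (bits 31:16) and ASID (bits 15:0).
 */
struct invlpgb_operands {
	uint64_t rax;
	uint32_t ecx;
	uint32_t edx;
};

static struct invlpgb_operands
pack_invlpgb(uint64_t addr, uint64_t flags, uint16_t extra_count,
	     int pmd_stride, uint16_t pcid, uint16_t asid)
{
	struct invlpgb_operands ops;

	ops.rax = addr | flags;
	ops.ecx = ((uint32_t)!!pmd_stride << 31) | extra_count;
	ops.edx = ((uint32_t)pcid << 16) | asid;
	return ops;
}

/*
 * Older assemblers have no mnemonic for these instructions, so the
 * kernel would emit raw bytes, roughly:
 *
 *   asm volatile(".byte 0x0f, 0x01, 0xfe"      // INVLPGB
 *                : : "a" (ops.rax), "c" (ops.ecx), "d" (ops.edx));
 *   asm volatile(".byte 0x0f, 0x01, 0xff");    // TLBSYNC: wait until
 *                                              // all broadcasts finish
 */
```

The instructions themselves are privileged, so the sketch only models the operand layout; TLBSYNC is what makes the flush globally visible before the issuing CPU proceeds.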
>
> Because x86 PCID space is limited, and there are some very large
> systems out there, broadcast TLB invalidation is only used for
> processes that are active on 3 or more CPUs, with the threshold
> being gradually increased the more the PCID space gets exhausted.
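The policy described above could look something like the following sketch. The base threshold of 3 CPUs comes from the cover letter; the scaling function and all names are made up for illustration (the actual series has its own global-ASID accounting):

```c
#include <stdbool.h>

/*
 * Illustrative sketch, not the series' actual code: use broadcast
 * TLB invalidation only for processes active on enough CPUs, and
 * raise that bar as the PCID/ASID space gets exhausted.
 */
#define BASE_CPU_THRESHOLD 3	/* from the cover letter */

static unsigned int broadcast_threshold(unsigned int asids_in_use,
					unsigned int asid_space)
{
	if (asid_space == 0)
		return BASE_CPU_THRESHOLD;
	/* Hypothetical scaling: double the threshold at full exhaustion. */
	return BASE_CPU_THRESHOLD +
	       (BASE_CPU_THRESHOLD * asids_in_use) / asid_space;
}

static bool use_broadcast_tlb_flush(unsigned int active_cpus,
				    unsigned int asids_in_use,
				    unsigned int asid_space)
{
	return active_cpus >= broadcast_threshold(asids_in_use, asid_space);
}
```

Processes below the threshold fall back to the existing IPI-based flush path, so the limited PCID space is spent only where broadcast invalidation pays off.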
>
> Combined with the removal of unnecessary lru_add_drain calls
> (see https://lkml.org/lkml/2024/12/19/1388) this results in a
> nice performance boost for the will-it-scale tlb_flush2_threads
> test on an AMD Milan system with 36 cores:
>
> - vanilla kernel:           527k loops/second
> - lru_add_drain removal:    731k loops/second
> - only INVLPGB:             527k loops/second
> - lru_add_drain + INVLPGB: 1157k loops/second
>
> Profiling with only the INVLPGB changes showed that while
> TLB invalidation went down from 40% of total CPU time
> to only around 4%, the contention simply moved to the
> LRU lock.

We briefly looked at using INVLPGB/TLBSYNC as part of the ASI work to
optimize away the async freeing logic which sends TLB flush IPIs.

I have a high-level question about INVLPGB/TLBSYNC that I could not
immediately find the answer to in the AMD manual. Sorry if I missed
the answer or something obvious.

Do we know what the underlying mechanism for delivering the TLB
flushes is? If a CPU has interrupts disabled, does it still receive
the broadcast TLB flush request and handle it?

My main concern is that TLBSYNC is a single instruction that seems
like it will wait for an arbitrary amount of time, and IIUC interrupts
(and NMIs) will not be delivered to the running CPU until after the
instruction completes execution (only at an instruction boundary).

Are there any guarantees about other CPUs handling the broadcast TLB
flush in a timely manner, or an explanation of how CPUs handle the
incoming requests in general?

>
> Fixing both at the same time roughly doubles the
> number of iterations per second for this case.
>
> v3:
>  - Remove paravirt tlb_remove_table call (thank you Qi Zheng)
>  - More suggested cleanups and changelog fixes by Peter and Nadav
> v2:
>  - Apply suggestions by Peter and Borislav (thank you!)
>  - Fix bug in arch_tlbbatch_flush, where we need to do both
>    the TLBSYNC, and flush the CPUs that are in the cpumask.
>  - Some updates to comments and changelogs based on questions.
>
>
