Message-ID: <CAJD7tkZiujbLOg_HYSd4iUYuOhjK5mHkVrhcgk6wePDd8dfCvA@mail.gmail.com>
Date: Thu, 9 Jan 2025 15:26:35 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Cooper <andrew.cooper3@...rix.com>
Cc: akpm@...ux-foundation.org, bp@...en8.de, dave.hansen@...ux.intel.com, 
	hpa@...or.com, jackmanb@...gle.com, kernel-team@...a.com, 
	linux-kernel@...r.kernel.org, linux-mm@...ck.org, luto@...nel.org, 
	mingo@...hat.com, nadav.amit@...il.com, peterz@...radead.org, 
	reijiw@...gle.com, riel@...riel.com, tglx@...utronix.de, x86@...nel.org, 
	zhengqi.arch@...edance.com
Subject: Re: [PATCH v3 00/12] AMD broadcast TLB invalidation

On Thu, Jan 9, 2025 at 3:00 PM Andrew Cooper <andrew.cooper3@...rix.com> wrote:
>
> On 09/01/2025 9:32 pm, Yosry Ahmed wrote:
> > On Wed, Jan 8, 2025 at 6:47 PM Andrew Cooper <andrew.cooper3@...rix.com> wrote:
> >>>> I suspect AMD wouldn't tell us exactly ;)
> >>> Well, ideally they would just tell us the conditions under which CPUs
> >>> respond to the broadcast TLB flush or the expectations around latency.
> >> [Resend, complete this time]
> >>
> >> Disclaimer.  I'm not at AMD; I don't know how they implement it; I'm
> >> just a random person on the internet.  But, here are a few things that
> >> might be relevant to know.
> >>
> >> AMD's SEV-SNP whitepaper [1] states that RMP permissions "are cached in
> >> the CPU TLB and related structures" and also "When required, hardware
> >> automatically performs TLB invalidations to ensure that all processors
> >> in the system see the updated RMP entry information."
> >>
> >> That sentence doesn't use "broadcast" or "remote", but "all processors"
> >> is a pretty clear clue.  Broadcast TLB invalidations are a building
> >> block of all the RMP-manipulation instructions.
> >>
> >> Furthermore, to be useful in this context, they need to be ordered with
> >> memory.  Specifically, a new pagewalk mustn't start after an
> >> invalidation, yet observe the stale RMP entry.
> >>
> >>
> >> x86 CPUs do have reasonable forward-progress guarantees, but in order to
> >> achieve forward progress, they need to e.g. guarantee that one memory
> >> access doesn't displace the TLB entry backing a different memory access
> >> from the same instruction, or you could livelock while trying to
> >> complete a single instruction.
> >>
> >> A consequence is that you can't safely invalidate a TLB entry of an
> >> in-progress instruction (although this means only the oldest instruction
> >> in the pipeline, because everything else is speculative and potentially
> >> transient).
> >>
> >>
> >> INVLPGB invalidations are interrupt-like from the point of view of the
> >> remote core, but are microarchitectural and can be taken irrespective of
> >> the architectural Interrupt and Global Interrupt Flags.  As a
> >> consequence, they'll need to wait until an instruction boundary to be
> >> processed.  While not AMD, the Intel RAR whitepaper [2] discusses the
> >> handling of RARs on the remote processor, and they share a number of
> >> constraints in common with INVLPGB.
> >>
> >>
> >> Overall, I'd expect the INVLPGB instructions to be pretty quick in and
> >> of themselves; interestingly, they're not identified as architecturally
> >> serialising.  The broadcast is probably posted, and will be dealt with
> >> by remote processors on the subsequent instruction boundary.  TLBSYNC is
> >> the barrier to wait until the invalidations have been processed, and
> >> this will block for an unspecified length of time, probably bounded by
> >> the "longest" instruction in progress on a remote CPU.  e.g. I expect it
> >> probably will suck if you have to wait for a WBINVD instruction to
> >> complete on a remote CPU.
> >>
> >> That said, architectural IPIs have the same conditions too, except on
> >> top of that you've got to run a whole interrupt handler.  So, with
> >> reasonable confidence, however slow TLBSYNC might be in the worst case,
> >> it's got absolutely nothing on the overhead of doing invalidations the
> >> old fashioned way.
> > Generally speaking, I am not arguing that TLB flush IPIs are worse
> > than INVLPGB/TLBSYNC, I think we should expect the latter to perform
> > better in most cases.
> >
> > But there is a difference here because the processor executing TLBSYNC
> > cannot serve interrupts or NMIs while waiting for remote CPUs, because
> > they have to be served at an instruction boundary, right?
>
> That's as per the architecture, yes.  NMIs do have to be served on
> instruction boundaries.  An NMI that becomes pending while a TLBSYNC is
> in progress will have to wait until the TLBSYNC completes.
>
> (Probably.  REP string instructions and AVX scatter/gather have explicit
> behaviours that let them be interrupted, and to continue from where
> they left off when the interrupt handler returns.  Depending on how
> TLBSYNC is implemented, it's just possible it has this property too.)

That would be great actually, if that's the case all my concerns go away.

>
> > Unless
> > TLBSYNC is an exception to that rule, or its execution is considered
> > completed before remote CPUs respond (i.e. the CPU executes it quickly
> > then enters a wait, doing "nothing").
> >
> > There are also intriguing corner cases that are not documented. For
> > example, you mention that it's reasonable to expect that a remote CPU
> > does not serve TLBSYNC except at the instruction boundary.
>
> INVLPGB needs to wait for an instruction boundary in order to be processed.
>
> All TLBSYNC needs to do is wait until it's certain that all the prior
> INVLPGBs issued by this CPU have been serviced.
>
> >  What if
> > that CPU is executing TLBSYNC? Do we have to wait for its execution to
> > complete? Is it possible to end up in a deadlock? This goes back to my
> > previous point about whether TLBSYNC is a special case or when it's
> > considered to have finished executing.
>
> Remember that the SEV-SNP instruction (PSMASH, PVALIDATE,
> RMP{ADJUST,UPDATE,QUERY,READ}) have an INVLPGB/TLBSYNC pair under the
> hood.  You can execute these instructions on different CPUs in parallel.
>
> It's certainly possible AMD missed something and there's a
> deadlock case in there.  But Google do offer SEV-SNP VMs and have the
> data and scale to know whether such a deadlock is happening in practice.

I am not familiar with SEV-SNP so excuse my ignorance. I am also
pretty sure that the percentage of SEV-SNP workloads is very low
compared to the workloads that would start using INVLPGB/TLBSYNC after
this series. So if there's a dormant bug or a rare scenario where the
TLBSYNC latency is massive, it may very well surface only now.

>
> >
> > I am sure people thought about that and I am probably worried over
> > nothing, but there are few details here, so one has to speculate.
> >
> > Again, sorry if I am making a fuss over nothing and it's all in my head.
>
> It's absolutely a valid question to ask.
>
> But x86 is full of longer delays than this.  The GIF for example can
> block NMIs until the hypervisor is complete with the world switch, and
> it's left as an exercise to software not to abuse this.  Taking an SMI
> will be orders of magnitude more expensive than anything discussed here.

Right. The difference is that these flushes happen much more frequently,
and are therefore more likely to run into cases with absurd delays.

It would be great if someone from AMD could shed some light on what is
to be reasonably expected from TLBSYNC here.

Anyway, thanks a lot for all your (very informative) responses :)
