Message-ID: <ZcOdZKmmYz3kMgwp@google.com>
Date: Wed, 7 Feb 2024 07:10:28 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: David Woodhouse <dwmw2@...radead.org>
Cc: Paul Durrant <paul@....org>, Paolo Bonzini <pbonzini@...hat.com>, Jonathan Corbet <corbet@....net>, 
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, 
	Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org, 
	"H. Peter Anvin" <hpa@...or.com>, Shuah Khan <shuah@...nel.org>, kvm@...r.kernel.org, 
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v12 18/20] KVM: pfncache: check the need for invalidation
 under read lock first

On Tue, Feb 06, 2024, David Woodhouse wrote:
> On Tue, 2024-02-06 at 20:47 -0800, Sean Christopherson wrote:
> > 
> > I'm saying this:
> > 
> >   When processing mmu_notifier invalidations for gpc caches, pre-check for
> >   overlap with the invalidation event while holding gpc->lock for read, and
> >   only take gpc->lock for write if the cache needs to be invalidated.  Doing
> >   a pre-check without taking gpc->lock for write avoids unnecessarily
> >   contending the lock for unrelated invalidations, which is very beneficial
> >   for caches that are heavily used (but rarely subjected to mmu_notifier
> >   invalidations).
> > 
> > is much friendlier to readers than this:
> > 
> >   Taking a write lock on a pfncache will be disruptive if the cache is
> >   heavily used (which only requires a read lock). Hence, in the MMU notifier
> >   callback, take read locks on caches to check for a match; only taking a
> >   write lock to actually perform an invalidation (after another check).
> 
> That's a somewhat subjective observation. I actually find the latter to
> be far more succinct and obvious.
> 
> Actually... maybe I find yours harder because it isn't actually stating
> the situation as I understand it. You said "unrelated invalidation" in
> your first email, and "overlap with the invalidation event" in this
> one... neither of which makes sense to me because there is no *other*
> invalidation here.

I am referring to the "mmu_notifier invalidation event".  While a particular GPC
may not be affected by the invalidation, it's entirely possible that a different
GPC and/or some chunk of guest memory does need to be invalidated/zapped.
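
Concretely, the pattern being proposed is roughly the below (a simplified
sketch, not the actual pfncache.c code; the list locking is elided and the
overlap check and field names are illustrative):

	static void gpc_invalidate_range(struct kvm *kvm, unsigned long start,
					 unsigned long end)
	{
		struct gfn_to_pfn_cache *gpc;

		list_for_each_entry(gpc, &kvm->gpc_list, list) {
			bool overlaps;

			/*
			 * Pre-check under the read lock; unrelated
			 * invalidations bail out here without ever
			 * contending the write lock.
			 */
			read_lock_irq(&gpc->lock);
			overlaps = gpc->valid &&
				   gpc->uhva >= start && gpc->uhva < end;
			read_unlock_irq(&gpc->lock);

			if (!overlaps)
				continue;

			/*
			 * Re-check under the write lock, as the cache may
			 * have been refreshed or deactivated after the read
			 * lock was dropped, then do the actual invalidation.
			 */
			write_lock_irq(&gpc->lock);
			if (gpc->valid &&
			    gpc->uhva >= start && gpc->uhva < end)
				gpc->valid = false;
			write_unlock_irq(&gpc->lock);
		}
	}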

> We're only talking about the MMU notifier gratuitously taking the write

It's not "the MMU notifier" though, it's KVM that unnecessarily takes a lock.  I
know I'm being somewhat pedantic, but the distinction does matter.  E.g. with
guest_memfd, there will be invalidations that get routed through this code, but
that do not originate in the mmu_notifier.

And I think it's important to make it clear to readers that an mmu_notifier is
really just a notification from the primary MMU, albeit a notification that comes
with a rather strict contract.

> lock on a GPC that it *isn't* going to invalidate (the common case),
> and thus disrupting users which are trying to take the read lock on
> that GPC.
