Message-ID: <bbd59a2c0897d8ca642ea8c4787b829190e75a4d.camel@infradead.org>
Date: Tue, 06 Feb 2024 20:27:37 -0800
From: David Woodhouse <dwmw2@...radead.org>
To: Sean Christopherson <seanjc@...gle.com>, Paul Durrant <paul@....org>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, Dave Hansen <dave.hansen@...ux.intel.com>,
x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>, Shuah Khan
<shuah@...nel.org>, kvm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v12 18/20] KVM: pfncache: check the need for
invalidation under read lock first
On Tue, 2024-02-06 at 20:22 -0800, Sean Christopherson wrote:
> On Mon, Jan 15, 2024, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@...zon.com>
> >
> > Taking a write lock on a pfncache will be disruptive if the cache is
>
> *Unnecessarily* taking a write lock.
No. Taking a write lock will be disruptive.
Unnecessarily taking a write lock will be unnecessarily disruptive.
Taking a write lock on a Thursday will be disruptive on a Thursday.
But the key is that if the cache is heavily used, the user gets
disrupted.
> Please save readers a bit of brain power
> and explain that this is beneficial when there are _unrelated_ invalidations.
I don't understand what you're saying there. Paul's sentence did have
an implicit "...so do that less, then", but that didn't take much brain
power to infer.
> > heavily used (which only requires a read lock). Hence, in the MMU notifier
> > callback, take read locks on caches to check for a match; only taking a
> > write lock to actually perform an invalidation (after another check).
>
> This doesn't have any dependency on this series, does it? I.e. this should be
> posted separately, and preferably with some performance data. Not having data
> isn't a sticking point, but it would be nice to verify that this isn't a
> pointless optimization.
No fundamental dependency, no. But it was triggered by the previous
patch, which makes kvm_xen_set_evtchn_fast() use read_trylock() and
makes it take the slow path when there's contention. It lives here just
fine as part of the series.