Message-ID: <8cba6a556233ae4e8cb401cb4ffa56b9d809e337.camel@infradead.org>
Date: Thu, 14 Dec 2023 14:08:05 +0000
From: David Woodhouse <dwmw2@...radead.org>
To: Paul Durrant <paul@....org>, Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Shuah Khan <shuah@...nel.org>,
kvm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH v10 18/19] KVM: pfncache: check the need for
invalidation under read lock first
On Mon, 2023-12-04 at 14:43 +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@...zon.com>
>
> Taking a write lock on a pfncache will be disruptive if the cache is
> heavily used (which only requires a read lock). Hence, in the MMU notifier
> callback, take read locks on caches to check for a match; only taking a
> write lock to actually perform an invalidation (after another check).
>
> Signed-off-by: Paul Durrant <pdurrant@...zon.com>
Reviewed-by: David Woodhouse <dwmw@...zon.co.uk>
In particular, the previous 'don't block on pfncache locks in
kvm_xen_set_evtchn_fast()' patch in this series is easy to justify on
the basis that it only falls back to the slow path if it can't take a
read lock immediately. And surely it should *always* be able to take a
read lock immediately unless there's an actual *writer* — which should
be a rare event, and means the cache was probably going to be
invalidated anyway.
But then we realised the MMU notifier was going to disrupt that.
> ---
> Cc: Sean Christopherson <seanjc@...gle.com>
> Cc: Paolo Bonzini <pbonzini@...hat.com>
> Cc: David Woodhouse <dwmw2@...radead.org>
>
> v10:
> - New in this version.
> ---
> virt/kvm/pfncache.c | 22 +++++++++++++++++++---
> 1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index c2a2d1e145b6..4da16d494f4b 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -29,14 +29,30 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
>
> spin_lock(&kvm->gpc_lock);
> list_for_each_entry(gpc, &kvm->gpc_list, list) {
> - write_lock_irq(&gpc->lock);
> + read_lock_irq(&gpc->lock);
>
> /* Only a single page so no need to care about length */
> if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
> gpc->uhva >= start && gpc->uhva < end) {
> - gpc->valid = false;
> + read_unlock_irq(&gpc->lock);
> +
> + /*
> + * There is a small window here where the cache could
> + * be modified, and invalidation would no longer be
> + * necessary. Hence check again whether invalidation
> + * is still necessary once the write lock has been
> + * acquired.
> + */
> +
> + write_lock_irq(&gpc->lock);
> + if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
> + gpc->uhva >= start && gpc->uhva < end)
> + gpc->valid = false;
> + write_unlock_irq(&gpc->lock);
> + continue;
> }
> - write_unlock_irq(&gpc->lock);
> +
> + read_unlock_irq(&gpc->lock);
> }
> spin_unlock(&kvm->gpc_lock);
> }