Message-ID: <Yoe5fkBzmnABpn2G@google.com>
Date: Fri, 20 May 2022 15:53:34 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Lai Jiangshan <jiangshanlai@...il.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Mingwei Zhang <mizhang@...gle.com>
Subject: Re: [PATCH v3 6/8] KVM: Fully serialize gfn=>pfn cache refresh via
mutex
On Fri, May 20, 2022, Paolo Bonzini wrote:
> On 4/29/22 23:00, Sean Christopherson wrote:
> > diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> > index 05cb0bcbf662..eaef31462bbe 100644
> > --- a/virt/kvm/pfncache.c
> > +++ b/virt/kvm/pfncache.c
> > @@ -157,6 +157,13 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
> > if (page_offset + len > PAGE_SIZE)
> > return -EINVAL;
> > + /*
> > + * If another task is refreshing the cache, wait for it to complete.
> > + * There is no guarantee that concurrent refreshes will see the same
> > + * gpa, memslots generation, etc..., so they must be fully serialized.
> > + */
> > + mutex_lock(&gpc->refresh_lock);
> > +
> > write_lock_irq(&gpc->lock);
> > old_pfn = gpc->pfn;
> > @@ -248,6 +255,8 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
> > out:
> > write_unlock_irq(&gpc->lock);
> > + mutex_unlock(&gpc->refresh_lock);
> > +
> > gpc_release_pfn_and_khva(kvm, old_pfn, old_khva);
> > return ret;
>
> Does kvm_gfn_to_pfn_cache_unmap also need to take the mutex, to avoid the
> WARN_ON(gpc->valid)?
I don't know what WARN_ON() you're referring to, but there is a double-free bug
if unmap() runs during an invalidation. That can be solved without taking the
mutex, though: just reset valid/pfn/khva before the retry.
While searching for how unmap() is used in the original series (there's no
user other than destroy...), I stumbled across this likely-related syzbot bug
that unfortunately didn't Cc KVM :-(
https://lore.kernel.org/all/00000000000073f09205db439577@google.com
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 72eee096a7cd..1719b0249dbc 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -228,6 +228,11 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
if (!old_valid || old_uhva != gpc->uhva) {
void *new_khva = NULL;
+ /* Invalidate the cache so a concurrent unmap() bails instead of double-freeing. */
+ gpc->valid = false;
+ gpc->pfn = KVM_PFN_ERR_FAULT;
+ gpc->khva = NULL;
+
new_pfn = hva_to_pfn_retry(kvm, gpc);
if (is_error_noslot_pfn(new_pfn)) {
ret = -EFAULT;
@@ -251,11 +256,7 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
/* Nothing more to do, the pfn is consumed only by the guest. */
}
- if (ret) {
- gpc->valid = false;
- gpc->pfn = KVM_PFN_ERR_FAULT;
- gpc->khva = NULL;
- } else {
+ if (!ret) {
gpc->valid = true;
gpc->pfn = new_pfn;
gpc->khva = new_khva;
@@ -283,6 +284,11 @@ void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
write_lock_irq(&gpc->lock);
+ if (!gpc->valid) {
+ write_unlock_irq(&gpc->lock);
+ return;
+ }
+
gpc->valid = false;
old_khva = gpc->khva - offset_in_page(gpc->khva);