Message-ID: <YmlOyISNFbrztPky@google.com>
Date: Wed, 27 Apr 2022 14:10:16 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, David Woodhouse <dwmw@...zon.co.uk>,
Mingwei Zhang <mizhang@...gle.com>,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: Re: [PATCH v2 6/8] KVM: Fix multiple races in gfn=>pfn cache refresh
On Wed, Apr 27, 2022, Sean Christopherson wrote:
> Finally, the refresh logic doesn't protect against concurrent refreshes
> with different GPAs (which may or may not be a desired use case, but it's
> allowed in the code), nor does it protect against a false negative on the
> memslot generation. If the first refresh sees a stale memslot generation,
> it will refresh the hva and generation before moving on to the hva=>pfn
> translation. If it then drops gpc->lock, a different user can come along,
> acquire gpc->lock, see that the memslot generation is fresh, and skip
> the hva=>pfn update due to the userspace address also matching (because
> it too was updated). Address this race by adding an "in-progress" flag
> so that the refresh that acquires gpc->lock first runs to completion
> before other users can start their refresh.
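To make the false negative concrete, the losing interleaving from the
second task's perspective looks roughly like this (a sketch with assumed
field/local names, not the verbatim pre-patch code):

	write_lock_irq(&gpc->lock);

	if (gpc->generation == slots->generation && gpc->uhva == old_uhva) {
		/*
		 * Task A already refreshed the generation and uhva, but
		 * dropped gpc->lock before redoing the hva=>pfn translation,
		 * so task B concludes nothing changed and skips the update
		 * that task A hasn't actually completed.
		 */
		write_unlock_irq(&gpc->lock);
		return 0;
	}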
...
> @@ -159,10 +249,23 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
>
> write_lock_irq(&gpc->lock);
>
> + /*
> + * If another task is refreshing the cache, wait for it to complete.
> + * There is no guarantee that concurrent refreshes will see the same
> + * gpa, memslots generation, etc..., so they must be fully serialized.
> + */
> + while (gpc->refresh_in_progress) {
> + write_unlock_irq(&gpc->lock);
> +
> + cond_resched();
> +
> + write_lock_irq(&gpc->lock);
> + }
> + gpc->refresh_in_progress = true;
Adding refresh_in_progress can likely go in a separate patch. I'll plan on doing
that in a v3 unless it proves to be painful.
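If it does split out cleanly, the standalone patch should amount to just
the flag plus the wait loop above, i.e. something like this on the struct
side (sketch only; assumes the flag sits in struct gfn_to_pfn_cache next
to the existing lock and is protected by it):

	struct gfn_to_pfn_cache {
		...
		rwlock_t lock;
		bool refresh_in_progress;
		...
	};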
> @@ -246,9 +296,26 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
> }
>
> out:
> + /*
> + * Invalidate the cache and purge the pfn/khva if the refresh failed.
> + * Some/all of the uhva, gpa, and memslot generation info may still be
> +	 * valid; leave it as is.
> + */
> + if (ret) {
> + gpc->valid = false;
> + gpc->pfn = KVM_PFN_ERR_FAULT;
> + gpc->khva = NULL;
> + }
> +
> + gpc->refresh_in_progress = false;
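For context, the consumer-side pattern that relies on gpc->valid being
cleared on failure looks roughly like this (adapted from the usage
comment for the gfn_to_pfn_cache API; error handling trimmed):

	read_lock(&gpc->lock);
	while (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE)) {
		read_unlock(&gpc->lock);

		/* A failed refresh leaves gpc->valid false, so bail. */
		if (kvm_gfn_to_pfn_cache_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE))
			return;

		read_lock(&gpc->lock);
	}

	/* gpc->khva may be dereferenced until gpc->lock is dropped. */

	read_unlock(&gpc->lock);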