Message-ID: <7b8d0c8c-d685-627b-676c-01c3d194fc82@amd.com>
Date: Fri, 20 Mar 2020 15:37:23 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: David Rientjes <rientjes@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH] KVM: SVM: Issue WBINVD after deactivating an SEV guest
On 3/20/20 3:34 PM, David Rientjes wrote:
> On Fri, 20 Mar 2020, Tom Lendacky wrote:
>
>> Currently, CLFLUSH is used to flush SEV guest memory before the guest is
>> terminated (or a memory hotplug region is removed). However, CLFLUSH is
>> not enough to ensure that SEV guest tagged data is flushed from the cache.
>>
>> With 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations"), the
>> original WBINVD was removed. This then exposed crashes at random times
>> because of a cache flush race with a page that had both a hypervisor and
>> a guest tag in the cache.
>>
>> Restore the WBINVD when destroying an SEV guest and add a WBINVD to the
>> svm_unregister_enc_region() function to ensure hotplug memory is flushed
>> when removed. The DF_FLUSH can still be avoided at this point.
>>
>> Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
>> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
>
> Acked-by: David Rientjes <rientjes@...gle.com>
>
> Should this be marked for stable?
The Fixes tag should take care of that.
Thanks,
Tom
>
> Cc: stable@...r.kernel.org # 5.5+
>
>> ---
>> arch/x86/kvm/svm.c | 22 ++++++++++++++--------
>> 1 file changed, 14 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index 08568ae9f7a1..d54cdca9c140 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -1980,14 +1980,6 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
>> static void __unregister_enc_region_locked(struct kvm *kvm,
>> struct enc_region *region)
>> {
>> - /*
>> - * The guest may change the memory encryption attribute from C=0 -> C=1
>> - * or vice versa for this memory range. Lets make sure caches are
>> - * flushed to ensure that guest data gets written into memory with
>> - * correct C-bit.
>> - */
>> - sev_clflush_pages(region->pages, region->npages);
>> -
>> sev_unpin_memory(kvm, region->pages, region->npages);
>> list_del(&region->list);
>> kfree(region);
>> @@ -2004,6 +1996,13 @@ static void sev_vm_destroy(struct kvm *kvm)
>>
>> mutex_lock(&kvm->lock);
>>
>> + /*
>> + * Ensure that all guest tagged cache entries are flushed before
>> + * releasing the pages back to the system for use. CLFLUSH will
>> + * not do this, so issue a WBINVD.
>> + */
>> + wbinvd_on_all_cpus();
>> +
>> /*
>> * if userspace was terminated before unregistering the memory regions
>> * then lets unpin all the registered memory.
>> @@ -7247,6 +7246,13 @@ static int svm_unregister_enc_region(struct kvm *kvm,
>> goto failed;
>> }
>>
>> + /*
>> + * Ensure that all guest tagged cache entries are flushed before
>> + * releasing the pages back to the system for use. CLFLUSH will
>> + * not do this, so issue a WBINVD.
>> + */
>> + wbinvd_on_all_cpus();
>> +
>> __unregister_enc_region_locked(kvm, region);
>>
>> mutex_unlock(&kvm->lock);
>> --
>> 2.17.1
>>
>>
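For context, here is a rough sketch of the two flush paths discussed above. The helper names (sev_clflush_pages, wbinvd_on_all_cpus) appear in the patch itself; the function bodies below are approximate, assume the 5.5-era code, and may differ in detail from what is actually upstream:

/*
 * sev_clflush_pages() flushes by virtual address through the host
 * mapping, so it only reaches cache lines tagged with the hypervisor's
 * key; guest-tagged lines for the same physical pages are untouched.
 * (Sketch only; see arch/x86/kvm/svm.c for the real helper.)
 */
static void sev_clflush_pages(struct page *pages[], unsigned long npages)
{
	unsigned long i;
	uint8_t *va;

	if (!pages || !npages)
		return;

	for (i = 0; i < npages; i++) {
		va = kmap_atomic(pages[i]);
		/* per-VA flush, hits host-tagged lines only */
		clflush_cache_range(va, PAGE_SIZE);
		kunmap_atomic(va);
	}
}

/*
 * wbinvd_on_all_cpus() writes back and invalidates every cache line on
 * every CPU regardless of how it is tagged, which is why the patch uses
 * it before the guest pages are released back to the system.
 * (Sketch only; see arch/x86/lib/cache-smp.c for the real helper.)
 */
static void __wbinvd(void *dummy)
{
	wbinvd();
}

void wbinvd_on_all_cpus(void)
{
	on_each_cpu(__wbinvd, NULL, 1);
}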