Message-ID: <9b271d5a-28b5-bd7e-87d1-c1698aabe6fc@redhat.com>
Date: Wed, 17 Feb 2021 13:39:12 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Dov Murik <dovmurik@...ux.vnet.ibm.com>,
Peter Gonda <pgonda@...gle.com>, stable@...r.kernel.org
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Joerg Roedel <joro@...tes.org>,
Tom Lendacky <thomas.lendacky@....com>,
Brijesh Singh <brijesh.singh@....com>,
Sean Christopherson <seanjc@...gle.com>, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH for 5.4] Fix unsynchronized access to sev members through
svm_register_enc_region
On 17/02/21 10:18, Dov Murik wrote:
> Hi Peter,
>
> On 08/02/2021 18:48, Peter Gonda wrote:
>> commit 19a23da53932bc8011220bd8c410cb76012de004 upstream.
>>
>> Grab kvm->lock before pinning memory when registering an encrypted
>> region; sev_pin_memory() relies on kvm->lock being held to ensure
>> correctness when checking and updating the number of pinned pages.
>>
>> Add a lockdep assertion to help prevent future regressions.
>>
>> Cc: Thomas Gleixner <tglx@...utronix.de>
>> Cc: Ingo Molnar <mingo@...hat.com>
>> Cc: "H. Peter Anvin" <hpa@...or.com>
>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>> Cc: Joerg Roedel <joro@...tes.org>
>> Cc: Tom Lendacky <thomas.lendacky@....com>
>> Cc: Brijesh Singh <brijesh.singh@....com>
>> Cc: Sean Christopherson <seanjc@...gle.com>
>> Cc: x86@...nel.org
>> Cc: kvm@...r.kernel.org
>> Cc: stable@...r.kernel.org
>> Cc: linux-kernel@...r.kernel.org
>> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
>> Signed-off-by: Peter Gonda <pgonda@...gle.com>
>>
>> V2
>> - Fix up patch description
>> - Correct file paths svm.c -> sev.c
>> - Add unlock of kvm->lock on sev_pin_memory error
>>
>> V1
>> - https://lore.kernel.org/kvm/20210126185431.1824530-1-pgonda@google.com/
>>
>> Message-Id: <20210127161524.2832400-1-pgonda@...gle.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>> ---
>> arch/x86/kvm/svm.c | 18 +++++++++++-------
>> 1 file changed, 11 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index 2b506904be02..93c89f1ffc5d 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -1830,6 +1830,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>> struct page **pages;
>> unsigned long first, last;
>>
>> + lockdep_assert_held(&kvm->lock);
>> +
>> if (ulen == 0 || uaddr + ulen < uaddr)
>> return NULL;
>>
>> @@ -7086,12 +7088,21 @@ static int svm_register_enc_region(struct kvm *kvm,
>> if (!region)
>> return -ENOMEM;
>>
>> + mutex_lock(&kvm->lock);
>> region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>> if (!region->pages) {
>> ret = -ENOMEM;
>> + mutex_unlock(&kvm->lock);
>> goto e_free;
>> }
>>
>> + region->uaddr = range->addr;
>> + region->size = range->size;
>> +
>> + mutex_lock(&kvm->lock);
>
> This extra mutex_lock call doesn't appear in the upstream patch (committed
> as 19a23da5393), but does appear in the 5.4 and 4.19 backports. Is it
> needed here?
Ouch. No, it isn't, and it's an insta-deadlock. Let me send a fix.
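
Presumably the fix is simply dropping the duplicate mutex_lock() before
list_add_tail(), restoring the upstream ordering where the lock taken at
the top covers both the pinning and the list insertion. Untested sketch
of how the 5.4 flow would then read (5.4's sev_pin_memory() still
returns NULL on failure rather than an ERR_PTR):

	mutex_lock(&kvm->lock);
	region->pages = sev_pin_memory(kvm, range->addr, range->size,
				       &region->npages, 1);
	if (!region->pages) {
		ret = -ENOMEM;
		mutex_unlock(&kvm->lock);
		goto e_free;
	}

	region->uaddr = range->addr;
	region->size = range->size;

	/* kvm->lock is already held here, so no second mutex_lock() */
	list_add_tail(&region->list, &sev->regions_list);
	mutex_unlock(&kvm->lock);

	/* the cache flush can stay outside the lock, as in the patch */
	sev_clflush_pages(region->pages, region->npages);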
Paolo
> -Dov
>
>
>> + list_add_tail(&region->list, &sev->regions_list);
>> + mutex_unlock(&kvm->lock);
>> +
>> /*
>> * The guest may change the memory encryption attribute from C=0 -> C=1
>> * or vice versa for this memory range. Lets make sure caches are
>> @@ -7100,13 +7111,6 @@ static int svm_register_enc_region(struct kvm *kvm,
>> */
>> sev_clflush_pages(region->pages, region->npages);
>>
>> - region->uaddr = range->addr;
>> - region->size = range->size;
>> -
>> - mutex_lock(&kvm->lock);
>> - list_add_tail(&region->list, &sev->regions_list);
>> - mutex_unlock(&kvm->lock);
>> -
>> return ret;
>>
>> e_free:
>>
>
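
For completeness, the failure mode Dov spotted: struct mutex is not
recursive, so with the backport as quoted, every successful registration
ends up doing

	mutex_lock(&kvm->lock);	/* taken at the top, before sev_pin_memory() */
	...
	mutex_lock(&kvm->lock);	/* taken again before list_add_tail(); the
				 * task now sleeps on a mutex it already
				 * holds, so the ioctl never returns */

hence "insta-deadlock".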