Message-ID: <28a005b7-9ae3-fe0d-b003-9aedba27dc85@maciej.szmigiero.name>
Date: Tue, 18 Jan 2022 18:29:50 +0100
From: "Maciej S. Szmigiero" <mail@...iej.szmigiero.name>
To: Nikunj A Dadhania <nikunj@....com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>,
Tom Lendacky <thomas.lendacky@....com>,
Peter Gonda <pgonda@...gle.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH 6/6] KVM: SVM: Pin SEV pages in MMU during
sev_launch_update_data()
On 18.01.2022 16:00, Maciej S. Szmigiero wrote:
> Hi Nikunj,
>
> On 18.01.2022 12:06, Nikunj A Dadhania wrote:
>> From: Sean Christopherson <sean.j.christopherson@...el.com>
>>
>> Pin the memory for the data being passed to launch_update_data()
>> because it gets encrypted before the guest is first run and must
>> not be moved which would corrupt it.
>>
>> Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
>> [ * Changed hva_to_gva() to take an extra argument and return gpa_t.
>> * Updated sev_pin_memory_in_mmu() error handling.
>> * As pinning/unpinning pages is handled within MMU, removed
>> {get,put}_user(). ]
>> Signed-off-by: Nikunj A Dadhania <nikunj@....com>
>> ---
>> arch/x86/kvm/svm/sev.c | 122 ++++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 119 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
>> index 14aeccfc500b..1ae714e83a3c 100644
>> --- a/arch/x86/kvm/svm/sev.c
>> +++ b/arch/x86/kvm/svm/sev.c
>> @@ -22,6 +22,7 @@
>> #include <asm/trapnr.h>
>> #include <asm/fpu/xcr.h>
>> +#include "mmu.h"
>> #include "x86.h"
>> #include "svm.h"
>> #include "svm_ops.h"
>> @@ -490,6 +491,110 @@ static unsigned long get_num_contig_pages(unsigned long idx,
>> return pages;
>> }
>> +#define SEV_PFERR_RO (PFERR_USER_MASK)
>> +#define SEV_PFERR_RW (PFERR_WRITE_MASK | PFERR_USER_MASK)
>> +
>> +static struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm,
>> + unsigned long hva)
>> +{
>> + struct kvm_memslots *slots = kvm_memslots(kvm);
>> + struct kvm_memory_slot *memslot;
>> + int bkt;
>> +
>> + kvm_for_each_memslot(memslot, bkt, slots) {
>> + if (hva >= memslot->userspace_addr &&
>> + hva < memslot->userspace_addr +
>> + (memslot->npages << PAGE_SHIFT))
>> + return memslot;
>> + }
>> +
>> + return NULL;
>> +}
>
> We have kvm_for_each_memslot_in_hva_range() now, please don't do a linear
> search through memslots.
> You might need to move the aforementioned macro from kvm_main.c to some
> header file, though.
Besides the performance considerations, I can't see the code here
taking into account the fact that an hva can map to multiple memslots
(they can overlap in the host address space).
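
For illustration, an untested sketch of how this could look with the
interval tree iterator (the container_of() arguments mirror the
existing user in __kvm_handle_hva_range(); the loop body is just a
placeholder):

	struct kvm_memslots *slots = kvm_memslots(kvm);
	struct interval_tree_node *node;
	struct kvm_memory_slot *memslot;

	kvm_for_each_memslot_in_hva_range(node, slots, hva, hva) {
		memslot = container_of(node, struct kvm_memory_slot,
				       hva_node[slots->node_idx]);
		/* every memslot that contains this hva ends up here */
	}

Stopping at the first match (as the current hva_to_memslot() does)
would silently ignore the other overlapping memslots, so
sev_pin_memory_in_mmu() needs to decide how to handle that case.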
Thanks,
Maciej