Message-ID: <7ea459c0-7e9b-4274-a888-5f42a90aecc1@amd.com>
Date: Mon, 21 Oct 2024 15:23:12 -0500
From: "Pratik R. Sampat" <pratikrajesh.sampat@....com>
To: Sean Christopherson <seanjc@...gle.com>
CC: <kvm@...r.kernel.org>, <pbonzini@...hat.com>, <pgonda@...gle.com>,
<thomas.lendacky@....com>, <michael.roth@....com>, <shuah@...nel.org>,
<linux-kselftest@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 1/9] KVM: selftests: Decouple SEV ioctls from asserts
Hi Sean,
On 10/14/2024 5:18 PM, Sean Christopherson wrote:
> On Thu, Sep 05, 2024, Pratik R. Sampat wrote:
>> +static inline int __sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
>> + uint64_t hva, uint64_t size)
>> {
>> struct kvm_sev_launch_update_data update_data = {
>> - .uaddr = (unsigned long)addr_gpa2hva(vm, gpa),
>> + .uaddr = hva,
>> .len = size,
>> };
>>
>> - vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
>> + return __vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
>> +}
>> +
>> +static inline void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
>> + uint64_t hva, uint64_t size)
>> +{
>> + int ret = __sev_launch_update_data(vm, gpa, hva, size);
>> +
>> + TEST_ASSERT_VM_VCPU_IOCTL(!ret, KVM_SEV_LAUNCH_UPDATE_DATA, ret, vm);
>> }
>>
>> #endif /* SELFTEST_KVM_SEV_H */
>> diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
>> index e9535ee20b7f..125a72246e09 100644
>> --- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
>> +++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
>> @@ -14,15 +14,16 @@
>> * and find the first range, but that's correct because the condition
>> * expression would cause us to quit the loop.
>> */
>> -static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
>> +static int encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
>
> This is all kinds of wrong. encrypt_region() should never fail. And by allowing
> it to fail, any unexpected failure becomes harder to debug. It's also a lie,
> because sev_register_encrypted_memory() isn't allowed to fail, and I would bet
> that most readers would expect _that_ call to fail given the name.
>
> The granularity is also poor, and the complete lack of idempotency is going to
> be problematic. E.g. only the first region is actually tested, and if someone
> tries to do negative testing on multiple regions, sev_register_encrypted_memory()
> will fail due to trying to re-encrypt a region.
>
> __sev_vm_launch_update() has similar issues. encrypt_region() is allowed to
> fail, but its call to KVM_SEV_LAUNCH_UPDATE_VMSA is not.
>
> And peeking ahead, passing an @assert parameter to __test_snp_launch_start() (or
> any helper) is a non-starter. Readers should not have to dive into a helper's
> implementation to understand that this
>
> __test_snp_launch_start(type, policy, 0, true);
>
> is a happy path and this
>
> ret = __test_snp_launch_start(type, policy, BIT(i), false);
>
> is a sad path.
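>
> If the sub-ioctl helpers follow the same __foo()/foo() convention as the
> sev_launch_update_data() change above, the call sites document themselves.
> Completely untested sketch, and purely illustrative (the exact fields/flags
> being fuzzed don't matter for the shape of it):
>
> 	static int __snp_launch_start(struct kvm_vm *vm, uint64_t policy,
> 				      uint16_t flags)
> 	{
> 		struct kvm_sev_snp_launch_start launch_start = {
> 			.policy = policy,
> 			.flags = flags,
> 		};
>
> 		return __vm_sev_ioctl(vm, KVM_SEV_SNP_LAUNCH_START, &launch_start);
> 	}
>
> 	static void snp_launch_start(struct kvm_vm *vm, uint64_t policy)
> 	{
> 		int ret = __snp_launch_start(vm, policy, 0);
>
> 		TEST_ASSERT_VM_VCPU_IOCTL(!ret, KVM_SEV_SNP_LAUNCH_START, ret, vm);
> 	}
>
> so that the happy path reads
>
> 	snp_launch_start(vm, policy);
>
> and a negative test reads
>
> 	ret = __snp_launch_start(vm, policy, BIT(i));
> 	TEST_ASSERT(ret, "LAUNCH_START should fail with a bogus flag");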
>
> And re-creating the VM every time is absurdly wasteful. While performance isn't
> a priority for selftests, there's no reason to make everything as slow as possible.
>
> Even just passing the page type to encrypt_region() is confusing. When the test
> is actually going to run the guest, applying ZERO and CPUID types to _all_ pages
> is completely nonsensical.
>
> In general, I think trying to reuse the happy path's infrastructure is going to
> do more harm than good. This is what I was trying to get at in my feedback for
> the previous version.
>
> For negative tests, I would honestly say develop them "from scratch", i.e.
> deliberately don't reuse the existing SEV-MEM/ES infrastructure. It'll require
> more copy+paste to get rolling, but I suspect that the end result will be less
> churn and far easier to read.
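>
> E.g. a dedicated negative test can create a throwaway VM, poke at exactly
> the one ioctl it cares about, and assert on the result, without touching
> the SEV-MEM/ES helpers at all. Very rough, untested sketch, reusing the
> __snp_launch_start() idea from above and with SNP_POLICY standing in for
> whatever default policy the test settles on:
>
> 	static void test_snp_launch_start_negative(void)
> 	{
> 		struct kvm_vcpu *vcpu;
> 		struct kvm_vm *vm;
> 		int i, ret;
>
> 		/* The guest never runs, so no guest code is needed. */
> 		vm = vm_sev_create_with_one_vcpu(KVM_X86_SNP_VM, NULL, &vcpu);
>
> 		/* Every attempt fails, so no launch state is left behind. */
> 		for (i = 0; i < 16; i++) {
> 			ret = __snp_launch_start(vm, SNP_POLICY, BIT(i));
> 			TEST_ASSERT(ret, "LAUNCH_START should fail on flag bit %d", i);
> 		}
>
> 		kvm_vm_free(vm);
> 	}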
This makes sense. I was trying to be as minimal as possible and avoid a
lot of replication while introducing the negative tests, but I see that
this has created several issues with granularity and even general
correctness, and overall causes more problems than it solves.

I will try to develop the negative-test interface separately, tailored
to this specific use case rather than piggybacking on the happy path,
when I send out patchset #2.