Message-ID: <Zw2Yz3mOMYggOPKK@google.com>
Date: Mon, 14 Oct 2024 15:18:55 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: "Pratik R. Sampat" <pratikrajesh.sampat@....com>
Cc: kvm@...r.kernel.org, pbonzini@...hat.com, pgonda@...gle.com, 
	thomas.lendacky@....com, michael.roth@....com, shuah@...nel.org, 
	linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/9] KVM: selftests: Decouple SEV ioctls from asserts

On Thu, Sep 05, 2024, Pratik R. Sampat wrote:
> +static inline int __sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
> +					   uint64_t hva, uint64_t size)
>  {
>  	struct kvm_sev_launch_update_data update_data = {
> -		.uaddr = (unsigned long)addr_gpa2hva(vm, gpa),
> +		.uaddr = hva,
>  		.len = size,
>  	};
>  
> -	vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
> +	return __vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
> +}
> +
> +static inline void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
> +					  uint64_t hva, uint64_t size)
> +{
> +	int ret = __sev_launch_update_data(vm, gpa, hva, size);
> +
> +	TEST_ASSERT_VM_VCPU_IOCTL(!ret, KVM_SEV_LAUNCH_UPDATE_DATA, ret, vm);
>  }
>  
>  #endif /* SELFTEST_KVM_SEV_H */
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c
> index e9535ee20b7f..125a72246e09 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/sev.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c
> @@ -14,15 +14,16 @@
>   * and find the first range, but that's correct because the condition
>   * expression would cause us to quit the loop.
>   */
> -static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
> +static int encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)

This is all kinds of wrong.  encrypt_region() should never fail.  And by allowing
it to fail, any unexpected failure becomes harder to debug.  It's also a lie,
because sev_register_encrypted_memory() isn't allowed to fail, and I would bet
that most readers would expect _that_ call to fail given the name.

The granularity is also poor, and the complete lack of idempotency is going to
be problematic.  E.g. only the first region is actually tested, and if someone
tries to do negative testing on multiple regions, sev_register_encrypted_memory()
will fail due to trying to re-encrypt a region.

__sev_vm_launch_update() has similar issues.  encrypt_region() is allowed to
fail, but its call to KVM_SEV_LAUNCH_UPDATE_VMSA is not.

And peeking ahead, passing an @assert parameter to __test_snp_launch_start() (or
any helper) is a non-starter.  Readers should not have to dive into a helper's
implementation to understand that this

  __test_snp_launch_start(type, policy, 0, true);

is a happy path and this

  ret = __test_snp_launch_start(type, policy, BIT(i), false);

is a sad path.

And re-creating the VM every time is absurdly wasteful.  While performance isn't
a priority for selftests, there's no reason to make everything as slow as possible.

Even just passing the page type to encrypt_region() is confusing.  When the test
is actually going to run the guest, applying ZERO and CPUID types to _all_ pages
is completely nonsensical.

In general, I think trying to reuse the happy path's infrastructure is going to
do more harm than good.  This is what I was trying to get at in my feedback for
the previous version.

For negative tests, I would honestly say develop them "from scratch", i.e.
deliberately don't reuse the existing SEV-MEM/ES infrastructure.  It'll require
more copy+paste to get rolling, but I suspect that the end result will be less
churn and far easier to read.
