Date:   Wed, 21 Sep 2022 14:19:02 -0700
From:   David Matlack <dmatlack@...gle.com>
To:     Vishal Annapurve <vannapurve@...gle.com>
Cc:     x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-kselftest@...r.kernel.org, pbonzini@...hat.com,
        shuah@...nel.org, bgardon@...gle.com, seanjc@...gle.com,
        oupton@...gle.com, peterx@...hat.com, vkuznets@...hat.com
Subject: Re: [V2 PATCH 4/8] KVM: selftests: x86: Precompute the result for
 is_{intel,amd}_cpu()

On Thu, Sep 15, 2022 at 12:04:44AM +0000, Vishal Annapurve wrote:
> Cache the vendor CPU type in a global variable so that multiple calls
> to is_intel_cpu() do not need to re-execute CPUID.
> 
> Add a CPU vendor check in kvm_hypercall() so that it executes the
> correct vmcall/vmmcall instruction when running on Intel/AMD hosts.
> This avoids an exit to KVM, which would otherwise patch the
> instruction to match the CPU type.

The commit shortlog neither mentions nor implies that this commit adds
AMD support to kvm_hypercall(). Please break this commit up into two:
one to precompute the result of is_{intel,amd}_cpu() and one to add AMD
support to kvm_hypercall().

If you really want to keep this as one commit (I don't know what the
benefit would be), please change the shortlog and commit message to
focus on the kvm_hypercall() change, as that is the real goal of this
commit. The precomputation is arguably an implementation detail, e.g.

  KVM: selftests: Add support for AMD to kvm_hypercall()

  Make it possible to use kvm_hypercall() on AMD by checking if running
  on an AMD CPU and, if so, using vmmcall instead of vmcall. In order to
  avoid executing CPUID in the guest on every call to kvm_hypercall()
  (which would be slow), pre-compute the result of is_{intel,amd}_cpu()
  as part of kvm_selftest_arch_init() and sync it into the guest
  after loading the ELF image.

But again, it'd be cleaner just to split it up. Caching the result of
is_{intel,amd}_cpu() is useful in its own right, independent of the
kvm_hypercall() change.
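Roughly, the split could look something like the sketch below (untested,
just rearranging the code from your patch; returning !is_cpu_amd instead
of comparing against false is only a stylistic tweak):

  /* Commit 1: cache the vendor so is_{intel,amd}_cpu() skip CPUID. */
  static bool is_cpu_amd;

  bool is_amd_cpu(void)
  {
          return is_cpu_amd;
  }

  bool is_intel_cpu(void)
  {
          return !is_cpu_amd;
  }

  void kvm_selftest_arch_init(void)
  {
          is_cpu_amd = cpu_vendor_string_is("AuthenticAMD");
  }

  void kvm_arch_post_vm_elf_load(struct kvm_vm *vm)
  {
          /* Make the cached vendor usable from guest code too. */
          sync_global_to_guest(vm, is_cpu_amd);
  }

  /* Commit 2: pick vmmcall vs. vmcall based on the cached vendor. */
  uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
                         uint64_t a3)
  {
          uint64_t r;

          if (is_amd_cpu())
                  asm volatile("vmmcall"
                               : "=a"(r)
                               : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
          else
                  asm volatile("vmcall"
                               : "=a"(r)
                               : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
          return r;
  }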

> 
> As part of this change, sync the global variable is_cpu_amd into the
> guest so the guest can determine which hypercall instruction to use
> without needing to re-execute CPUID for every hypercall.
> 
> Suggested-by: Sean Christopherson <seanjc@...gle.com>
> Signed-off-by: Vishal Annapurve <vannapurve@...gle.com>
> ---
>  .../testing/selftests/kvm/lib/x86_64/processor.c  | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 25ae972f5c71..c0ae938772f6 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -19,6 +19,7 @@
>  #define MAX_NR_CPUID_ENTRIES 100
>  
>  vm_vaddr_t exception_handlers;
> +static bool is_cpu_amd;
>  
>  static void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent)
>  {
> @@ -1019,7 +1020,7 @@ static bool cpu_vendor_string_is(const char *vendor)
>  
>  bool is_intel_cpu(void)
>  {
> -	return cpu_vendor_string_is("GenuineIntel");
> +	return (is_cpu_amd == false);
>  }
>  
>  /*
> @@ -1027,7 +1028,7 @@ bool is_intel_cpu(void)
>   */
>  bool is_amd_cpu(void)
>  {
> -	return cpu_vendor_string_is("AuthenticAMD");
> +	return (is_cpu_amd == true);
>  }
>  
>  void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
> @@ -1182,9 +1183,15 @@ uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
>  {
>  	uint64_t r;
>  
> -	asm volatile("vmcall"
> +	if (is_amd_cpu())
> +		asm volatile("vmmcall"
>  		     : "=a"(r)
>  		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
> +	else
> +		asm volatile("vmcall"
> +		     : "=a"(r)
> +		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
> +
>  	return r;
>  }
>  
> @@ -1314,8 +1321,10 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm)
>  
>  void kvm_selftest_arch_init(void)
>  {
> +	is_cpu_amd = cpu_vendor_string_is("AuthenticAMD");
>  }
>  
>  void kvm_arch_post_vm_elf_load(struct kvm_vm *vm)
>  {
> +	sync_global_to_guest(vm, is_cpu_amd);
>  }
> -- 
> 2.37.2.789.g6183377224-goog
> 

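For what it's worth, the end state should let guest code issue hypercalls
the same way on either vendor, e.g. something like the following (purely
illustrative; the hypercall number and arguments are placeholders, not
taken from this series):

  #include <linux/kvm_para.h>   /* KVM_HC_SEND_IPI, only an example nr */

  #include "kvm_util.h"
  #include "processor.h"

  static void guest_code(void)
  {
          /* Works on both Intel and AMD hosts; args are placeholders. */
          uint64_t r = kvm_hypercall(KVM_HC_SEND_IPI, 0, 0, 0, 0);

          GUEST_SYNC(r);
          GUEST_DONE();
  }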