Date: Wed, 17 Apr 2024 15:55:24 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, 
	isaku.yamahata@...el.com, xiaoyao.li@...el.com, binbin.wu@...ux.intel.com, 
	chao.gao@...el.com
Subject: Re: [PATCH v2 06/10] KVM, x86: add architectural support code for #VE

KVM: x86:

On Tue, Apr 16, 2024, Paolo Bonzini wrote:
> Dump the contents of the #VE info data structure and assert that #VE does
> not happen, but do not yet do anything with it.
> 
> No functional change intended, separated for clarity only.
> 
> Extracted from a patch by Isaku Yamahata <isaku.yamahata@...el.com>.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>

...

> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 6780313914f8..2c746318c6c3 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6408,6 +6408,18 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
>  	if (secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
>  		pr_err("Virtual processor ID = 0x%04x\n",
>  		       vmcs_read16(VIRTUAL_PROCESSOR_ID));
> +	if (secondary_exec_control & SECONDARY_EXEC_EPT_VIOLATION_VE) {
> +		struct vmx_ve_information *ve_info;
> +
> +		pr_err("VE info address = 0x%016llx\n",
> +		       vmcs_read64(VE_INFORMATION_ADDRESS));
> +		ve_info = __va(vmcs_read64(VE_INFORMATION_ADDRESS));

As I pointed out in v1[*], doing the PA->VA conversion on an address pulled from
the VMCS is a bad idea.  Just use vmx->ve_info.

 : If KVM is dumping the VMCS, then something has gone wrong, possibly in
 : hardware or ucode.  Dereferencing an address from the VMCS, which could very
 : well be corrupted, is a terrible idea.  This could easily escalate from a
 : dead VM into a dead host.

[*] https://lore.kernel.org/all/Zd6Sy_PujXJVji0n@google.com
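E.g. something like this (completely untested sketch; the printed fields assume
the vmx_ve_information layout added earlier in this series):

	struct vcpu_vmx *vmx = to_vmx(vcpu);
	...
	if (secondary_exec_control & SECONDARY_EXEC_EPT_VIOLATION_VE) {
		/* Dump KVM's cached VA, don't trust the (possibly corrupt) VMCS. */
		struct vmx_ve_information *ve_info = vmx->ve_info;

		pr_err("VE info address = 0x%016llx\n",
		       vmcs_read64(VE_INFORMATION_ADDRESS));
		pr_err("ve_info: exit_reason = 0x%08x, exit_qual = 0x%016llx\n",
		       ve_info->exit_reason, ve_info->exit_qualification);
	}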
