Message-ID: <3349e153-83ae-3c55-ee88-2036b2ce38d8@redhat.com>
Date: Tue, 26 Jan 2021 12:39:28 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Wei Huang <wei.huang2@....com>, kvm@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, vkuznets@...hat.com,
mlevitsk@...hat.com, seanjc@...gle.com, joro@...tes.org,
bp@...en8.de, tglx@...utronix.de, mingo@...hat.com, x86@...nel.org,
jmattson@...gle.com, wanpengli@...cent.com, bsd@...hat.com,
dgilbert@...hat.com, luto@...capital.net
Subject: Re: [PATCH v3 0/4] Handle #GP for SVM execution instructions
On 26/01/21 09:18, Wei Huang wrote:
> While running SVM-related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
> CPUs check EAX against reserved memory regions (e.g. SMM memory on host)
> before checking VMCB's instruction intercept. If EAX falls into such
> memory areas, #GP is triggered before #VMEXIT. This causes unexpected #GP
> under nested virtualization. To solve this problem, this patchset makes
> KVM trap #GP and emulate these SVM instructions accordingly.
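For reference, all three instructions share the 0F 01 opcode and differ
only in the ModRM byte: VMRUN is 0F 01 D8, VMLOAD is 0F 01 DA and VMSAVE
is 0F 01 DB, so recognizing them is the easy part of the #GP handler.
Below is a minimal standalone sketch of just that identification step
(hypothetical helper name; the series itself reuses the x86 emulator's
decoder rather than open-coding opcode bytes like this):

/*
 * Illustration only, not KVM code: classify an instruction byte stream
 * as VMRUN/VMLOAD/VMSAVE, the SVM instructions affected by the erratum
 * described above.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool is_vmrun_vmload_vmsave(const unsigned char *insn, size_t len)
{
        /* All three encode as 0F 01 followed by a fixed ModRM byte. */
        if (len < 3 || insn[0] != 0x0f || insn[1] != 0x01)
                return false;
        return insn[2] == 0xd8 ||   /* VMRUN  */
               insn[2] == 0xda ||   /* VMLOAD */
               insn[2] == 0xdb;     /* VMSAVE */
}

int main(void)
{
        const unsigned char vmrun[]  = { 0x0f, 0x01, 0xd8 };
        const unsigned char vmsave[] = { 0x0f, 0x01, 0xdb };
        const unsigned char clgi[]   = { 0x0f, 0x01, 0xdd };

        printf("vmrun:  %d\n", is_vmrun_vmload_vmsave(vmrun, sizeof(vmrun)));
        printf("vmsave: %d\n", is_vmrun_vmload_vmsave(vmsave, sizeof(vmsave)));
        printf("clgi:   %d\n", is_vmrun_vmload_vmsave(clgi, sizeof(clgi)));
        return 0;
}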
>
> Newer AMD CPUs change this behavior by triggering #VMEXIT before #GP;
> the change is indicated by CPUID_0x8000000A_EDX[28]. In that case, #GP
> interception is not required. This patchset also supports the new
> feature.
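As a side note, the bit can be probed directly from a guest or host; a
minimal standalone sketch using the compiler's <cpuid.h> helpers
(illustration only, not part of the series):

/*
 * Illustration only: read CPUID Fn8000_000A EDX[28], the bit the cover
 * letter refers to (exposed to guests as X86_FEATURE_SVME_ADDR_CHK).
 * When set, the CPU checks the intercept before the address check, so
 * #GP interception for this erratum is unnecessary.
 */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* __get_cpuid() returns 0 if the requested leaf is unsupported. */
        if (!__get_cpuid(0x8000000a, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 0x8000000A not available");
                return 1;
        }

        printf("SVME_ADDR_CHK (EDX bit 28): %s\n",
               (edx & (1u << 28)) ? "present" : "not present");
        return 0;
}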
>
> This patchset has been verified with the vmrun_errata_test and
> vmware_backdoor tests of kvm_unit_test on the following configs. It was
> also verified that vmware_backdoor can be turned on under nested on
> nested.
> * Current CPU: nested, nested on nested
> * New CPU with X86_FEATURE_SVME_ADDR_CHK: nested, nested on nested
>
> v2->v3:
> * Change the decode function name to x86_decode_emulated_instruction()
> * Add a new variable, svm_gp_erratum_intercept, to control interception
> * Turn on VM's X86_FEATURE_SVME_ADDR_CHK feature in svm_set_cpu_caps()
> * Fix instruction emulation for vmware_backdoor under nested-on-nested
> * Minor comment fixes
>
> v1->v2:
> * Factor out instruction decode for sharing
> * Re-org gp_interception() handling for both #GP and vmware_backdoor
> * Use kvm_cpu_cap for X86_FEATURE_SVME_ADDR_CHK feature support
> * Add nested on nested support
>
> Thanks,
> -Wei
>
> Wei Huang (4):
> KVM: x86: Factor out x86 instruction emulation with decoding
> KVM: SVM: Add emulation support for #GP triggered by SVM instructions
> KVM: SVM: Add support for SVM instruction address check change
> KVM: SVM: Support #GP handling for the case of nested on nested
>
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/kvm/svm/svm.c | 128 +++++++++++++++++++++++++----
> arch/x86/kvm/x86.c | 62 ++++++++------
> arch/x86/kvm/x86.h | 2 +
> 4 files changed, 152 insertions(+), 41 deletions(-)
>
Queued, thanks.
Paolo