Message-ID: <Y9R1+hPaTWcEZMOX@google.com>
Date: Sat, 28 Jan 2023 01:10:18 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org, Sandipan Das <sandipan.das@....com>,
	Paolo Bonzini <pbonzini@...hat.com>, Jim Mattson <jmattson@...gle.com>,
	Peter Zijlstra <peterz@...radead.org>, Dave Hansen <dave.hansen@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>, Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
	Josh Poimboeuf <jpoimboe@...nel.org>, Daniel Sneddon <daniel.sneddon@...ux.intel.com>,
	Jiaxi Chen <jiaxi.chen@...ux.intel.com>, Babu Moger <babu.moger@....com>,
	linux-kernel@...r.kernel.org, Jing Liu <jing2.liu@...el.com>,
	Wyes Karny <wyes.karny@....com>, x86@...nel.org,
	"H. Peter Anvin" <hpa@...or.com>, Santosh Shukla <santosh.shukla@....com>
Subject: Re: [PATCH v2 10/11] KVM: SVM: implement support for vNMI

On Tue, Nov 29, 2022, Maxim Levitsky wrote:
> This patch implements support for injecting pending NMIs via the new
> .kvm_x86_set_hw_nmi_pending hook using AMD's vNMI feature.
>
> Note that vNMI cannot cause a VM exit, which is needed when a nested
> guest intercepts NMIs.
>
> Therefore, to avoid breaking nesting, vNMI is inhibited while a nested
> guest is running and the legacy NMI window detection and delivery
> method is used instead.
>
> While it would be possible to pass vNMI through when a nested guest
> does not intercept NMIs, such usage is very uncommon and not worth
> optimizing for.
>
> Signed-off-by: Santosh Shukla <santosh.shukla@....com>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> ---
>  arch/x86/kvm/svm/nested.c |  42 +++++++++++++++
>  arch/x86/kvm/svm/svm.c    | 111 ++++++++++++++++++++++++++++++--------
>  arch/x86/kvm/svm/svm.h    |  10 ++++
>  3 files changed, 140 insertions(+), 23 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index e891318595113e..5bea672bf8b12d 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -623,6 +623,42 @@ static bool is_evtinj_nmi(u32 evtinj)
>  	return type == SVM_EVTINJ_TYPE_NMI;
>  }
>
> +static void nested_svm_save_vnmi(struct vcpu_svm *svm)
> +{
> +	struct vmcb *vmcb01 = svm->vmcb01.ptr;
> +
> +	/*
> +	 * Copy the vNMI state back to software NMI tracking state
> +	 * for the duration of the nested run
> +	 */
> +

Unnecessary newline.

> +	svm->nmi_masked = vmcb01->control.int_ctl & V_NMI_MASK;
> +	svm->vcpu.arch.nmi_pending += vmcb01->control.int_ctl & V_NMI_PENDING;
> +}
> +
> +static void nested_svm_restore_vnmi(struct vcpu_svm *svm)
> +{
> +	struct kvm_vcpu *vcpu = &svm->vcpu;
> +	struct vmcb *vmcb01 = svm->vmcb01.ptr;
> +
> +	/*
> +	 * Restore the vNMI state from the software NMI tracking state
> +	 * after a nested run
> +	 */
> +

Unnecessary newline.

> +	if (svm->nmi_masked)
> +		vmcb01->control.int_ctl |= V_NMI_MASK;
> +	else
> +		vmcb01->control.int_ctl &= ~V_NMI_MASK;
> +
> +	if (vcpu->arch.nmi_pending) {
> +		vcpu->arch.nmi_pending--;
> +		vmcb01->control.int_ctl |= V_NMI_PENDING;
> +	} else
> +		vmcb01->control.int_ctl &= ~V_NMI_PENDING;

Needs curly braces.
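For reference, a minimal sketch of nested_svm_restore_vnmi() with both review
comments applied (the stray blank line after the comment dropped, and the final
else braced to match the braced if arm, per kernel coding style). This is only
an illustration based on the quoted hunk, not the respun patch:

static void nested_svm_restore_vnmi(struct vcpu_svm *svm)
{
	struct kvm_vcpu *vcpu = &svm->vcpu;
	struct vmcb *vmcb01 = svm->vmcb01.ptr;

	/*
	 * Restore the vNMI state from the software NMI tracking state
	 * after a nested run.
	 */
	if (svm->nmi_masked)
		vmcb01->control.int_ctl |= V_NMI_MASK;
	else
		vmcb01->control.int_ctl &= ~V_NMI_MASK;

	/* Hand at most one pending NMI back to hardware vNMI tracking. */
	if (vcpu->arch.nmi_pending) {
		vcpu->arch.nmi_pending--;
		vmcb01->control.int_ctl |= V_NMI_PENDING;
	} else {
		vmcb01->control.int_ctl &= ~V_NMI_PENDING;
	}
}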