lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 23 Apr 2020 07:42:09 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Cathy Avery <cavery@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, pbonzini@...hat.com,
	vkuznets@...hat.com, wei.huang2@....com
Subject: Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI

On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
> With NMI intercept moved to check_nested_events there is a race
> condition where vcpu->arch.nmi_pending is set late causing

How is nmi_pending set late?  The KVM_{G,S}ET_VCPU_EVENTS paths can't set
it because the current KVM_RUN thread holds the mutex, and the only other
call to process_nmi() is in the request path of vcpu_enter_guest, which
has already executed.

> the execution of check_nested_events to not setup correctly
> for nested.exit_required. A second call to check_nested_events
> allows the injectable nmi to be detected in time in order to
> require immediate exit from L2 to L1.
>
> Signed-off-by: Cathy Avery <cavery@...hat.com>
> ---
>  arch/x86/kvm/x86.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 027dfd278a97..ecfafcd93536 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7734,10 +7734,17 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
>  		vcpu->arch.smi_pending = false;
>  		++vcpu->arch.smi_count;
>  		enter_smm(vcpu);
> -	} else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
> -		--vcpu->arch.nmi_pending;
> -		vcpu->arch.nmi_injected = true;
> -		kvm_x86_ops.set_nmi(vcpu);
> +	} else if (vcpu->arch.nmi_pending) {
> +		if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
> +			r = kvm_x86_ops.check_nested_events(vcpu);
> +			if (r != 0)
> +				return r;
> +		}
> +		if (kvm_x86_ops.nmi_allowed(vcpu)) {
> +			--vcpu->arch.nmi_pending;
> +			vcpu->arch.nmi_injected = true;
> +			kvm_x86_ops.set_nmi(vcpu);
> +		}
>  	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
>  		/*
>  		 * Because interrupts can be injected asynchronously, we are
> --
> 2.20.1
>