Message-ID: <87zhbdw02i.fsf@vitty.brq.redhat.com>
Date: Wed, 15 Apr 2020 11:49:25 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Cathy Avery <cavery@...hat.com>, pbonzini@...hat.com
Cc: wei.huang2@....com, Jim Mattson <jmattson@...gle.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI
Cathy Avery <cavery@...hat.com> writes:
> The first patch moves the nested NMI exit into the new
> check_nested_events() hook. The second patch fixes the NMI pending race
> condition that the move exposes.
>
> Cathy Avery (2):
> KVM: SVM: Implement check_nested_events for NMI
> KVM: x86: check_nested_events if there is an injectable NMI
>
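For readers without the patches at hand, such a hook would look roughly
like this (a minimal sketch only; the helper names nested_exit_on_nmi()
and nested_svm_nmi_vmexit() are illustrative assumptions, not necessarily
what the series actually uses):

static int svm_check_nested_events(struct kvm_vcpu *vcpu)
{
        struct vcpu_svm *svm = to_svm(vcpu);

        /*
         * If L1 intercepts NMIs, a pending NMI must turn into a nested
         * #VMEXIT before anything is injected into L2; the two helpers
         * below stand in for the intercept check and the synthesized
         * exit.
         */
        if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) {
                if (svm->nested.nested_run_pending)
                        return -EBUSY;
                nested_svm_nmi_vmexit(svm);
                return 0;
        }

        return 0;
}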
Not directly related to this series, but I just noticed that we have the
following comment in inject_pending_event():
        /* try to inject new event if pending */
        if (vcpu->arch.exception.pending) {
                ...
                if (vcpu->arch.exception.nr == DB_VECTOR) {
                        /*
                         * This code assumes that nSVM doesn't use
                         * check_nested_events(). If it does, the
                         * DR6/DR7 changes should happen before L1
                         * gets a #VMEXIT for an intercepted #DB in
                         * L2. (Under VMX, on the other hand, the
                         * DR6/DR7 changes should not happen in the
                         * event of a VM-exit to L1 for an intercepted
                         * #DB in L2.)
                         */
                        kvm_deliver_exception_payload(vcpu);
                        if (vcpu->arch.dr7 & DR7_GD) {
                                vcpu->arch.dr7 &= ~DR7_GD;
                                kvm_update_dr7(vcpu);
                        }
                }
                kvm_x86_ops.queue_exception(vcpu);
        }
Now that this series implements check_nested_events() on SVM, do we need
to do anything here? Cc: Jim, who added this guard (commit f10c729ff9652).
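For context, the DR6 half of those "DR6/DR7 changes" is the #DB case in
kvm_deliver_exception_payload(), which merges the pending exception
payload into the guest's DR6 along these lines (heavily simplified; the
real code also handles DR6.RTM and the #PF payload):

        case DB_VECTOR:
                /*
                 * A #DB only clears DR6[3:0]; the remaining DR6 bits
                 * are sticky, so merge the payload into the register
                 * instead of overwriting it.
                 */
                vcpu->arch.dr6 &= ~DR_TRAP_BITS;
                vcpu->arch.dr6 |= vcpu->arch.exception.payload;
                break;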
> arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
> arch/x86/kvm/svm/svm.c | 2 +-
> arch/x86/kvm/svm/svm.h | 15 ---------------
> arch/x86/kvm/x86.c | 15 +++++++++++----
> 4 files changed, 33 insertions(+), 20 deletions(-)
--
Vitaly