Message-ID: <cd9be62e3c2018a4f779f65fed46954e9431e0b0.camel@redhat.com>
Date: Wed, 06 Jul 2022 15:04:10 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Oliver Upton <oupton@...gle.com>,
Peter Shier <pshier@...gle.com>
Subject: Re: [PATCH v2 13/21] KVM: x86: Formalize blocking of nested pending
exceptions
On Tue, 2022-06-14 at 20:47 +0000, Sean Christopherson wrote:
> Capture nested_run_pending as block_pending_exceptions so that the logic
> of why exceptions are blocked only needs to be documented once instead of
> at every place that employs the logic.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/svm/nested.c | 20 ++++++++++----------
> arch/x86/kvm/vmx/nested.c | 23 ++++++++++++-----------
> 2 files changed, 22 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 471d40e97890..460161e67ce5 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1347,10 +1347,16 @@ static inline bool nested_exit_on_init(struct vcpu_svm *svm)
>
> static int svm_check_nested_events(struct kvm_vcpu *vcpu)
> {
> - struct vcpu_svm *svm = to_svm(vcpu);
> - bool block_nested_events =
> - kvm_event_needs_reinjection(vcpu) || svm->nested.nested_run_pending;
> struct kvm_lapic *apic = vcpu->arch.apic;
> + struct vcpu_svm *svm = to_svm(vcpu);
> + /*
> + * Only a pending nested run blocks a pending exception. If there is a
> + * previously injected event, the pending exception occurred while said
> + * event was being delivered and thus needs to be handled.
> + */
Tiny nitpick about the comment:
One could instead say that if there is an injected event, we are in the
middle of delivering it, thus we are not on an instruction boundary, and
thus we don't process events (e.g. interrupts).
So maybe word the comment along those lines?
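For example, a rough sketch of the wording I have in mind (not a request
for this exact text):

	/*
	 * Only a pending nested run blocks a pending exception. An injected
	 * event means the vCPU is in the middle of delivering that event,
	 * i.e. not at an instruction boundary, so new events (e.g.
	 * interrupts) are not processed, but the pending exception occurred
	 * while said event was being delivered and still needs to be
	 * handled.
	 */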
> + bool block_nested_exceptions = svm->nested.nested_run_pending;
> + bool block_nested_events = block_nested_exceptions ||
> + kvm_event_needs_reinjection(vcpu);
Tiny nitpick: I don't much like the word 'nested' in the name, as it can
also be read as a nested exception (e.g. an exception that happened while
jumping to an exception handler).
Here we just mean exceptions/events for the guest, so I would suggest
dropping the word 'nested'.
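Something like this, for example (the exact names are of course up to you):

	bool block_exceptions = svm->nested.nested_run_pending;
	bool block_events = block_exceptions ||
			    kvm_event_needs_reinjection(vcpu);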
>
> if (lapic_in_kernel(vcpu) &&
> test_bit(KVM_APIC_INIT, &apic->pending_events)) {
> @@ -1363,13 +1369,7 @@ static int svm_check_nested_events(struct kvm_vcpu *vcpu)
> }
>
> if (vcpu->arch.exception.pending) {
> - /*
> - * Only a pending nested run can block a pending exception.
> - * Otherwise an injected NMI/interrupt should either be
> - * lost or delivered to the nested hypervisor in the EXITINTINFO
> - * vmcb field, while delivering the pending exception.
> - */
> - if (svm->nested.nested_run_pending)
> + if (block_nested_exceptions)
> return -EBUSY;
> if (!nested_exit_on_exception(svm))
> return 0;
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index fafdcbfeca1f..50fe66f0cc1b 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3903,11 +3903,17 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
>
> static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
> {
> - struct vcpu_vmx *vmx = to_vmx(vcpu);
> - unsigned long exit_qual;
> - bool block_nested_events =
> - vmx->nested.nested_run_pending || kvm_event_needs_reinjection(vcpu);
> struct kvm_lapic *apic = vcpu->arch.apic;
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
> + unsigned long exit_qual;
> + /*
> + * Only a pending nested run blocks a pending exception. If there is a
> + * previously injected event, the pending exception occurred while said
> + * event was being delivered and thus needs to be handled.
> + */
> + bool block_nested_exceptions = vmx->nested.nested_run_pending;
> + bool block_nested_events = block_nested_exceptions ||
> + kvm_event_needs_reinjection(vcpu);
>
> if (lapic_in_kernel(vcpu) &&
> test_bit(KVM_APIC_INIT, &apic->pending_events)) {
> @@ -3941,15 +3947,10 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
> * Process exceptions that are higher priority than Monitor Trap Flag:
> * fault-like exceptions, TSS T flag #DB (not emulated by KVM, but
> * could theoretically come in from userspace), and ICEBP (INT1).
> - *
> - * Note that only a pending nested run can block a pending exception.
> - * Otherwise an injected NMI/interrupt should either be
> - * lost or delivered to the nested hypervisor in the IDT_VECTORING_INFO,
> - * while delivering the pending exception.
> */
> if (vcpu->arch.exception.pending &&
> !(vmx_get_pending_dbg_trap(vcpu) & ~DR6_BT)) {
> - if (vmx->nested.nested_run_pending)
> + if (block_nested_exceptions)
> return -EBUSY;
> if (!nested_vmx_check_exception(vcpu, &exit_qual))
> goto no_vmexit;
> @@ -3966,7 +3967,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
> }
>
> if (vcpu->arch.exception.pending) {
> - if (vmx->nested.nested_run_pending)
> + if (block_nested_exceptions)
> return -EBUSY;
> if (!nested_vmx_check_exception(vcpu, &exit_qual))
> goto no_vmexit;
Besides the nitpicks:
Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
Best regards,
Maxim Levitsky