Message-ID: <c020e65d-9528-dab4-a577-3564f939c39d@redhat.com>
Date: Tue, 1 Mar 2022 18:10:33 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sasha Levin <sashal@...nel.org>, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Cc: Maxim Levitsky <mlevitsk@...hat.com>, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, kvm@...r.kernel.org
Subject: Re: [PATCH MANUALSEL 5.10 2/2] KVM: x86: nSVM: deal with L1
hypervisor that intercepts interrupts but lets L2 control them
On 2/22/22 15:05, Sasha Levin wrote:
> From: Maxim Levitsky <mlevitsk@...hat.com>
>
> [ Upstream commit 2b0ecccb55310a4b8ad5d59c703cf8c821be6260 ]
>
> Fix a corner case in which the L1 hypervisor intercepts
> interrupts (INTERCEPT_INTR) and either doesn't set
> virtual interrupt masking (V_INTR_MASKING) or enters a
> nested guest with EFLAGS.IF disabled prior to the entry.
>
> In this case, despite the fact that L1 intercepts the interrupts,
> KVM still needs to set up an interrupt window and wait before
> injecting the INTR vmexit.
>
> Currently, KVM instead enters an endless loop of 'req_immediate_exit'.
>
> Exactly the same issue also happens for SMIs and NMIs;
> fix those as well.
>
> Note that on VMX this case is impossible, as there is only the
> 'VM-exit on external interrupts' execution control, which is either
> set, in which case both the host's and the guest's EFLAGS.IF are
> ignored, or not set, in which case no such VM exits are delivered.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> Message-Id: <20220207155447.840194-8-mlevitsk@...hat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> Signed-off-by: Sasha Levin <sashal@...nel.org>
> ---
> arch/x86/kvm/svm/svm.c | 17 +++++++++++++----
> 1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index d515c8e68314c..ec9586a30a50c 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3237,11 +3237,13 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> if (svm->nested.nested_run_pending)
> return -EBUSY;
>
> + if (svm_nmi_blocked(vcpu))
> + return 0;
> +
> /* An NMI must not be injected into L2 if it's supposed to VM-Exit. */
> if (for_injection && is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
> return -EBUSY;
> -
> - return !svm_nmi_blocked(vcpu);
> + return 1;
> }
>
> static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
> @@ -3293,9 +3295,13 @@ bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
> static int svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
> +
> if (svm->nested.nested_run_pending)
> return -EBUSY;
>
> + if (svm_interrupt_blocked(vcpu))
> + return 0;
> +
> /*
> * An IRQ must not be injected into L2 if it's supposed to VM-Exit,
> * e.g. if the IRQ arrived asynchronously after checking nested events.
> @@ -3303,7 +3309,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> if (for_injection && is_guest_mode(vcpu) && nested_exit_on_intr(svm))
> return -EBUSY;
>
> - return !svm_interrupt_blocked(vcpu);
> + return 1;
> }
>
> static void enable_irq_window(struct kvm_vcpu *vcpu)
> @@ -4023,11 +4029,14 @@ static int svm_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
> if (svm->nested.nested_run_pending)
> return -EBUSY;
>
> + if (svm_smi_blocked(vcpu))
> + return 0;
> +
> /* An SMI must not be injected into L2 if it's supposed to VM-Exit. */
> if (for_injection && is_guest_mode(vcpu) && nested_exit_on_smi(svm))
> return -EBUSY;
>
> - return !svm_smi_blocked(vcpu);
> + return 1;
> }
>
> static int svm_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
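
For readers following the control flow, here is a minimal, standalone
userspace sketch of why the reordering in the hunks above matters.  It
is not kernel code: struct vcpu_model, its fields and the
interrupt_allowed_old()/interrupt_allowed_new() helpers are illustrative
assumptions, and the return-value reading (negative: retry with an
immediate exit; 0: blocked, so open an interrupt window; positive:
inject now) is inferred from the commit message rather than quoted from
the generic x86 code.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct vcpu_model {
	bool nested_run_pending;
	bool in_guest_mode;
	bool nested_exit_on_intr;	/* L1 sets INTERCEPT_INTR          */
	bool interrupt_blocked;		/* e.g. L2 runs with EFLAGS.IF = 0 */
};

/* Old ordering: the nested-exit check runs before the blocked check. */
static int interrupt_allowed_old(const struct vcpu_model *v, bool for_injection)
{
	if (v->nested_run_pending)
		return -EBUSY;
	if (for_injection && v->in_guest_mode && v->nested_exit_on_intr)
		return -EBUSY;		/* reached even while blocked */
	return !v->interrupt_blocked;
}

/* New ordering, mirroring the patch: report "blocked" first. */
static int interrupt_allowed_new(const struct vcpu_model *v, bool for_injection)
{
	if (v->nested_run_pending)
		return -EBUSY;
	if (v->interrupt_blocked)
		return 0;		/* caller opens an interrupt window */
	if (for_injection && v->in_guest_mode && v->nested_exit_on_intr)
		return -EBUSY;
	return 1;
}

int main(void)
{
	/* The corner case from the commit message: L1 intercepts INTR,
	 * but interrupts are currently blocked while running L2. */
	struct vcpu_model v = {
		.in_guest_mode = true,
		.nested_exit_on_intr = true,
		.interrupt_blocked = true,
	};

	printf("old ordering: %d (retry with an immediate exit, forever)\n",
	       interrupt_allowed_old(&v, true));
	printf("new ordering: %d (set up an interrupt window and wait)\n",
	       interrupt_allowed_new(&v, true));
	return 0;
}

The same reordering is applied to the NMI and SMI callbacks in the
patch; only the *_blocked() and nested_exit_on_*() predicates differ.
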
Acked-by: Paolo Bonzini <pbonzini@...hat.com>
Paolo