Message-ID: <Y9hybI65So5X2LFg@google.com>
Date: Tue, 31 Jan 2023 01:44:12 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Maxim Levitsky <mlevitsk@...hat.com>
Cc: kvm@...r.kernel.org, Sandipan Das <sandipan.das@....com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Borislav Petkov <bp@...en8.de>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Daniel Sneddon <daniel.sneddon@...ux.intel.com>,
Jiaxi Chen <jiaxi.chen@...ux.intel.com>,
Babu Moger <babu.moger@....com>, linux-kernel@...r.kernel.org,
Jing Liu <jing2.liu@...el.com>,
Wyes Karny <wyes.karny@....com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v2 02/11] KVM: nSVM: clean up the copying of V_INTR bits
from vmcb02 to vmcb12

On Sat, Jan 28, 2023, Sean Christopherson wrote:
> On Tue, Nov 29, 2022, Maxim Levitsky wrote:
> > the V_IRQ and v_TPR bits don't exist when virtual interrupt
> > masking is not enabled, therefore the KVM should not copy these
> > bits regardless of V_IRQ intercept.
>
> Hmm, the APM disagrees:
>
> The APIC's TPR always controls the task priority for physical interrupts, and the
> V_TPR always controls virtual interrupts.
>
> While running a guest with V_INTR_MASKING cleared to 0:
> • Writes to CR8 affect both the APIC's TPR and the V_TPR register.
>
>
> ...
>
> The three VMCB fields V_IRQ, V_INTR_PRIO, and V_INTR_VECTOR indicate whether there
> is a virtual interrupt pending, and, if so, what its vector number and priority are.
>
> IIUC, V_INTR_MASKING_MASK is mostly about EFLAGS.IF, with a small side effect on
> TPR. E.g. a VMM could pend a V_IRQ but clear V_INTR_MASKING and expect the guest
> to take the V_IRQ. At least, that's my reading of things.
>
> > Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> > ---
> > arch/x86/kvm/svm/nested.c | 23 ++++++++---------------
> > 1 file changed, 8 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> > index 37af0338da7c32..aad3145b2f62fe 100644
> > --- a/arch/x86/kvm/svm/nested.c
> > +++ b/arch/x86/kvm/svm/nested.c
> > @@ -412,24 +412,17 @@ void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
> > */
> > void nested_sync_control_from_vmcb02(struct vcpu_svm *svm)
> > {
> > - u32 mask;
> > + u32 mask = 0;
> > svm->nested.ctl.event_inj = svm->vmcb->control.event_inj;
> > svm->nested.ctl.event_inj_err = svm->vmcb->control.event_inj_err;
> >
> > - /* Only a few fields of int_ctl are written by the processor. */
> > - mask = V_IRQ_MASK | V_TPR_MASK;
> > - if (!(svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) &&
> > - svm_is_intercept(svm, INTERCEPT_VINTR)) {
> > - /*
> > - * In order to request an interrupt window, L0 is usurping
> > - * svm->vmcb->control.int_ctl and possibly setting V_IRQ
> > - * even if it was clear in L1's VMCB. Restoring it would be
> > - * wrong. However, in this case V_IRQ will remain true until
> > - * interrupt_window_interception calls svm_clear_vintr and
> > - * restores int_ctl. We can just leave it aside.
> > - */
> > - mask &= ~V_IRQ_MASK;

Argh! *shakes fist at KVM and SVM*

This is ridiculously convoluted, and I'm pretty sure there are existing bugs. If
L1 runs L2 with V_IRQ=1 and V_INTR_MASKING=1, and KVM requests an interrupt window,
then KVM will overwrite vmcb02's int_vector and int_ctl, i.e. clobber L1's V_IRQ,
but then silently clear INTERCEPT_VINTR in recalc_intercepts() and thus prevent
svm_clear_vintr() from being reached, i.e. prevent restoring L1's V_IRQ.
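
To spell out what the "usurping" means for the sync, here's a throwaway userspace
model of the masked int_ctl copy (the bit values mirror the svm.h defines, the rest
is invented purely for illustration): if KVM stuffs V_IRQ into vmcb02 to request an
interrupt window and V_IRQ stays in the sync mask, a V_IRQ that L1 never set leaks
back into the vmcb12 cache.

  #include <stdint.h>
  #include <stdio.h>

  /* Bit layout as in arch/x86/include/asm/svm.h (copied here for illustration). */
  #define V_TPR_MASK              0x0f
  #define V_IRQ_MASK              (1 << 8)
  #define V_INTR_MASKING_MASK     (1 << 24)

  /* Models the masked copy done by nested_sync_control_from_vmcb02(). */
  static uint32_t sync_int_ctl(uint32_t vmcb12_int_ctl, uint32_t vmcb02_int_ctl,
                               uint32_t mask)
  {
          return (vmcb12_int_ctl & ~mask) | (vmcb02_int_ctl & mask);
  }

  int main(void)
  {
          /* L1 runs L2 with V_INTR_MASKING=1 and V_IRQ clear in its VMCB. */
          uint32_t vmcb12_int_ctl = V_INTR_MASKING_MASK;
          uint32_t vmcb02_int_ctl = vmcb12_int_ctl;

          /* L0 usurps vmcb02's int_ctl to request an interrupt window. */
          vmcb02_int_ctl |= V_IRQ_MASK;

          /* With V_IRQ in the mask, L1 is handed back a V_IRQ it never set. */
          printf("with V_IRQ in mask:    %#x\n",
                 sync_int_ctl(vmcb12_int_ctl, vmcb02_int_ctl, V_IRQ_MASK | V_TPR_MASK));
          printf("without V_IRQ in mask: %#x\n",
                 sync_int_ctl(vmcb12_int_ctl, vmcb02_int_ctl, V_TPR_MASK));
          return 0;
  }
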
Bug #1 is that KVM shouldn't clobber the V_IRQ fields if KVM ultimately decides
not to open an interrupt window. Bug #2 is that KVM needs to open an interrupt
window if save.RFLAGS.IF=1, as interrupts may become unblocked in that case,
e.g. if L2 is in an interrupt shadow.
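
Restating Bug #2's condition as a standalone predicate, purely to make the
reasoning explicit (not proposed code, the real change is in the
recalc_intercepts()/svm_set_vintr() hunks below):

  #include <stdbool.h>
  #include <stdint.h>

  #define X86_EFLAGS_IF           (1UL << 9)
  #define V_INTR_MASKING_MASK     (1 << 24)

  /*
   * With V_INTR_MASKING=1 in vmcb12, the effective RFLAGS.IF for L1 physical
   * interrupts is the value L1 had at VMRUN, which lives in vmcb01's save area
   * while L2 is active.  An interrupt window request is useful only if that IF
   * bit is set; if it's clear, interrupts can't become unblocked until L2 exits.
   */
  bool vintr_intercept_useful(uint32_t vmcb12_int_ctl, uint64_t vmcb01_rflags)
  {
          if (!(vmcb12_int_ctl & V_INTR_MASKING_MASK))
                  return true;    /* L2's own RFLAGS.IF governs, windows work normally. */

          return vmcb01_rflags & X86_EFLAGS_IF;
  }
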
So I think something like the below, split over two patches?

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 05d38944a6c0..ad1e70ac8669 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -139,13 +139,18 @@ void recalc_intercepts(struct vcpu_svm *svm)
if (g->int_ctl & V_INTR_MASKING_MASK) {
/*
- * Once running L2 with HF_VINTR_MASK, EFLAGS.IF and CR8
- * does not affect any interrupt we may want to inject;
- * therefore, writes to CR8 are irrelevant to L0, as are
- * interrupt window vmexits.
+ * If L2 is active and V_INTR_MASKING is enabled in vmcb12,
+ * disable intercept of CR8 writes as L2's CR8 does not affect
+ * any interrupt KVM may want to inject.
+ *
+ * Similarly, disable intercept of virtual interrupts (used to
+ * detect interrupt windows) if the saved RFLAGS.IF is '0', as
+ * the effective RFLAGS.IF for L1 interrupts will never be set
+ * while L2 is running (L2's RFLAGS.IF doesn't affect L1 IRQs).
*/
vmcb_clr_intercept(c, INTERCEPT_CR8_WRITE);
- vmcb_clr_intercept(c, INTERCEPT_VINTR);
+ if (!(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF))
+ vmcb_clr_intercept(c, INTERCEPT_VINTR);
}
/*
@@ -416,18 +421,18 @@ void nested_sync_control_from_vmcb02(struct vcpu_svm *svm)
/* Only a few fields of int_ctl are written by the processor. */
mask = V_IRQ_MASK | V_TPR_MASK;
- if (!(svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK) &&
- svm_is_intercept(svm, INTERCEPT_VINTR)) {
- /*
- * In order to request an interrupt window, L0 is usurping
- * svm->vmcb->control.int_ctl and possibly setting V_IRQ
- * even if it was clear in L1's VMCB. Restoring it would be
- * wrong. However, in this case V_IRQ will remain true until
- * interrupt_window_interception calls svm_clear_vintr and
- * restores int_ctl. We can just leave it aside.
- */
+
+ /*
+ * Don't sync vmcb02 V_IRQ back to vmcb12 if KVM (L0) is intercepting
+ * virtual interrupts in order to request an interrupt window, as KVM
+ * has usurped vmcb02's int_ctl. If an interrupt window opens before
+ * the next VM-Exit, svm_clear_vintr() will restore vmcb12's int_ctl.
+ * If no window opens, V_IRQ will be correctly preserved in vmcb12's
+ * int_ctl (because it was never recognized while L2 was running).
+ */
+ if (svm_is_intercept(svm, INTERCEPT_VINTR) &&
+ !test_bit(INTERCEPT_VINTR, (unsigned long *)svm->nested.ctl.intercepts))
mask &= ~V_IRQ_MASK;
- }
if (nested_vgif_enabled(svm))
mask |= V_GIF_MASK;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b103fe7cbc82..59d2891662ef 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1580,6 +1580,16 @@ static void svm_set_vintr(struct vcpu_svm *svm)
svm_set_intercept(svm, INTERCEPT_VINTR);
+ /*
+ * Recalculating intercepts may have cleared the VINTR intercept. If
+ * V_INTR_MASKING is enabled in vmcb12, then the effective RFLAGS.IF
+ * for L1 physical interrupts is L1's RFLAGS.IF at the time of VMRUN.
+ * Requesting an interrupt window if save.RFLAGS.IF=0 is pointless as
+ * interrupts will never be unblocked while L2 is running.
+ */
+ if (!svm_is_intercept(svm, INTERCEPT_VINTR))
+ return;
+
/*
* This is just a dummy VINTR to actually cause a vmexit to happen.
* Actual injection of virtual interrupts happens through EVENTINJ.