Date:   Wed, 20 Apr 2022 16:15:38 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     "Maciej S. Szmigiero" <mail@...iej.szmigiero.name>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/8] KVM: nSVM: Sync next_rip field from vmcb12 to vmcb02

On Wed, Apr 20, 2022, Maciej S. Szmigiero wrote:
> On 20.04.2022 17:00, Paolo Bonzini wrote:
> > On 4/4/22 19:21, Sean Christopherson wrote:
> > > On Mon, Apr 04, 2022, Maciej S. Szmigiero wrote:
> > > > > @@ -1606,7 +1622,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
> > > > >        nested_copy_vmcb_control_to_cache(svm, ctl);
> > > > >        svm_switch_vmcb(svm, &svm->nested.vmcb02);
> > > > > -    nested_vmcb02_prepare_control(svm);
> > > > > +    nested_vmcb02_prepare_control(svm, save->rip);
> > > > 
> > > >                        ^
> > > > I guess this should be "svm->vmcb->save.rip", since
> > > > KVM_{GET,SET}_NESTED_STATE "save" field contains vmcb01 data,
> > > > not vmcb{0,1}2 (in contrast to the "control" field).
> > > 
> > > Argh, yes.  Is userspace required to set L2 guest state prior to KVM_SET_NESTED_STATE?
> > > If not, this will result in garbage being loaded into vmcb02.
> > > 
> > 
> > Let's just require X86_FEATURE_NRIPS, either in general or just to
> > enable nested virtualization
> 
> 👍

Hmm, so requiring NRIPS for nested doesn't actually buy us anything.  KVM still
has to deal with userspace hiding NRIPS from L1, so unless I'm overlooking something,
the only change would be:

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index bdf8375a718b..7bed4e05aaea 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -686,7 +686,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
         */
        if (svm->nrips_enabled)
                vmcb02->control.next_rip    = svm->nested.ctl.next_rip;
-       else if (boot_cpu_has(X86_FEATURE_NRIPS))
+       else
                vmcb02->control.next_rip    = vmcb12_rip;

        if (is_evtinj_soft(vmcb02->control.event_inj)) {

And sadly, because SVM doesn't provide the instruction length if an exit occurs
while vectoring a software interrupt/exception, making NRIPS mandatory doesn't buy
us much either.
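To spell out what that means in practice, here's a rough sketch of the flow; it
just paraphrases the hunks further down (names and field usage are lifted from
them), it is not an extra patch, and the exact code in the series may differ:

/*
 * Because an exit that occurs while vectoring a soft INTn/INT3/INTO does not
 * report the instruction length, KVM must compute and stash the "next" RIP
 * at injection time, NRIPS or not.
 */
old_rip = kvm_rip_read(vcpu);                   /* RIP of the INTn/INT3/INTO */
if (!__svm_skip_emulated_instruction(vcpu, false))
        return -EIO;
rip = kvm_rip_read(vcpu);                       /* RIP after the instruction */

svm->soft_int_old_rip  = old_rip;
svm->soft_int_next_rip = rip;

kvm_rip_write(vcpu, old_rip);                   /* rewind, the skip was only a probe */
svm->vmcb->control.next_rip = rip;

/* Later, if vectoring is interrupted and the event must be re-injected: */
if (kvm_is_linear_rip(vcpu, svm->soft_int_old_rip + svm->soft_int_csbase))
        svm->vmcb->control.next_rip = svm->soft_int_next_rip;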

I believe the below diff is the total savings (plus the above nested thing) against
this series if NRIPS is mandatory (ignoring the setup code, which is a wash).  It
does eliminate the rewind in svm_complete_soft_interrupt() and the funky logic in
svm_update_soft_interrupt_rip(), but that's it AFAICT.  The most obnoxious part,
having to unwind EMULTYPE_SKIP when retrieving the next RIP for software int/except
injection, doesn't go away :-(

I'm not totally opposed to requiring NRIPS, but I'm not in favor of it either.

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 66cfd533aaf8..6b48af423246 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -354,7 +354,7 @@ static int __svm_skip_emulated_instruction(struct kvm_vcpu *vcpu,
        if (sev_es_guest(vcpu->kvm))
                goto done;

-       if (nrips && svm->vmcb->control.next_rip != 0) {
+       if (svm->vmcb->control.next_rip != 0) {
                WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));
                svm->next_rip = svm->vmcb->control.next_rip;
        }
@@ -401,7 +401,7 @@ static int svm_update_soft_interrupt_rip(struct kvm_vcpu *vcpu)
         * in use, the skip must not commit any side effects such as clearing
         * the interrupt shadow or RFLAGS.RF.
         */
-       if (!__svm_skip_emulated_instruction(vcpu, !nrips))
+       if (!__svm_skip_emulated_instruction(vcpu, false))
                return -EIO;

        rip = kvm_rip_read(vcpu);
@@ -420,11 +420,8 @@ static int svm_update_soft_interrupt_rip(struct kvm_vcpu *vcpu)
        svm->soft_int_old_rip = old_rip;
        svm->soft_int_next_rip = rip;

-       if (nrips)
-               kvm_rip_write(vcpu, old_rip);
-
-       if (static_cpu_has(X86_FEATURE_NRIPS))
-               svm->vmcb->control.next_rip = rip;
+       kvm_rip_write(vcpu, old_rip);
+       svm->vmcb->control.next_rip = rip;

        return 0;
 }
@@ -3738,20 +3735,9 @@ static void svm_complete_soft_interrupt(struct kvm_vcpu *vcpu, u8 vector,
         * the same event, i.e. if the event is a soft exception/interrupt,
         * otherwise next_rip is unused on VMRUN.
         */
-       if (nrips && (is_soft || (is_exception && kvm_exception_is_soft(vector))) &&
+       if ((is_soft || (is_exception && kvm_exception_is_soft(vector))) &&
            kvm_is_linear_rip(vcpu, svm->soft_int_old_rip + svm->soft_int_csbase))
                svm->vmcb->control.next_rip = svm->soft_int_next_rip;
-       /*
-        * If NextRIP isn't enabled, KVM must manually advance RIP prior to
-        * injecting the soft exception/interrupt.  That advancement needs to
-        * be unwound if vectoring didn't complete.  Note, the new event may
-        * not be the injected event, e.g. if KVM injected an INTn, the INTn
-        * hit a #NP in the guest, and the #NP encountered a #PF, the #NP will
-        * be the reported vectored event, but RIP still needs to be unwound.
-        */
-       else if (!nrips && (is_soft || is_exception) &&
-                kvm_is_linear_rip(vcpu, svm->soft_int_next_rip + svm->soft_int_csbase))
-               kvm_rip_write(vcpu, svm->soft_int_old_rip);
 }

 static void svm_complete_interrupts(struct kvm_vcpu *vcpu)
