Message-ID: <CAL715WJEkT6heVT1P2RZw_5NxBcORCrBTS60L_RZT-05zr_zsw@mail.gmail.com>
Date:   Thu, 25 Aug 2022 13:35:06 -0700
From:   Mingwei Zhang <mizhang@...gle.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     Jim Mattson <jmattson@...gle.com>,
        Maxim Levitsky <mlevitsk@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Joerg Roedel <joro@...tes.org>, kvm <kvm@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Oliver Upton <oupton@...gle.com>
Subject: Re: [PATCH 1/5] KVM: x86: Get vmcs12 pages before checking pending interrupts

> There are two uses of KVM_REQ_GET_NESTED_STATE_PAGES:
>
>   1. Defer loads when leaving SMM.
>
>   2. Defer loads for KVM_SET_NESTED_STATE.
>
> #1 is fully solvable without a request, e.g. split ->leave_smm() into two helpers,
> one that restores whatever metadata is needed before restoring from SMRAM, and
> a second to load guest virtualization state _after_ restoring all other guest
> state from SMRAM.
>
> #2 is done because of the reasons Jim listed above, which are specific to demand
> paging (including userfaultfd).  There might be some interactions with other
> ioctls() (KVM_SET_SREGS?) that are papered over by the request, but that can be
> solved without a full request since only the first KVM_RUN after KVM_SET_NESTED_STATE
> needs to refresh things (though ideally we'd avoid that).
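
On #1, just to make sure I follow: the split you have in mind would look
roughly like the below? (Callback names and signatures here are made up,
purely to illustrate my understanding.)

    /* Hypothetical split of ->leave_smm(); names invented for illustration. */
    struct kvm_x86_ops {
            ...
            /*
             * Restore whatever metadata is needed before common code
             * restores guest state from SMRAM.
             */
            int (*leave_smm_prepare)(struct kvm_vcpu *vcpu, const char *smstate);

            /*
             * Load guest virtualization state (e.g. re-enter guest mode)
             * only after all other guest state has been restored from
             * SMRAM, so no KVM_REQ_GET_NESTED_STATE_PAGES is needed.
             */
            int (*leave_smm_finish)(struct kvm_vcpu *vcpu);
            ...
    };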

Ack on the fact that the 2-step process is specific to demand paging.

Currently, KVM_SET_NESTED_STATE is a two-step process in which the
first step does not require vmcs12 to be ready. So I am thinking about
what it would mean to deprecate KVM_REQ_GET_NESTED_STATE_PAGES.
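
For context, my mental model of the current two-step flow is roughly
the following (paraphrasing from memory, so the exact code may differ):

    /* Step 1: ioctl(KVM_SET_NESTED_STATE) only records the deferred work. */
    vmx_set_nested_state()
        -> kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);

    /* Step 2: the first KVM_RUN actually maps the vmcs12 pages. */
    vcpu_enter_guest()
        -> if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu))
               if (!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))
                   /* bail out to userspace */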

For case #2, I think there might be two options if we deprecate it:

 - Ensure vmcs12 is ready during the call to ioctl(KVM_SET_NESTED_STATE).
This requires, as Jim mentioned, that the thread listening for remote
page requests be ready to serve them before this call (this is true
regardless of whether the demand paging is uffd-based or Google-based).
We can definitely solve this ordering problem, but that is beyond the
scope of KVM; it basically requires our userspace to cooperate.

 - Ensure vmcs12 is ready before VM-entry. This basically defers the
vmcs12 checks until the last moment, and I think it might be the better
option. However, isn't it the same as the original implementation, i.e.,
instead of using KVM_REQ_GET_NESTED_STATE_PAGES we would have to use
some other flag to tell KVM to load vmcs12 (see the sketch below)?
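
Something like this, i.e. a flag instead of a request (the field name
is made up, just to illustrate the idea):

    /* In vmx_set_nested_state(), instead of kvm_make_request(): */
    vmx->nested.need_vmcs12_pages = true;

    /* And right before entering the guest: */
    if (vmx->nested.need_vmcs12_pages) {
            if (!nested_get_vmcs12_pages(vcpu))
                    return -EINVAL; /* or exit to userspace */
            vmx->nested.need_vmcs12_pages = false;
    }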

Thanks.
-Mingwei
>
> In other words, if the demand paging use case goes away, then KVM can get rid of
> KVM_REQ_GET_NESTED_STATE_PAGES.
>
> > KVM_SET_NESTED_STATE in VMX, while in SVM implementation, it is simply
> > just a kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
>
> svm_set_nested_state() very roughly open codes enter_svm_guest_mode().  VMX could
> do the same, but that may or may not be a net positive.
>
> > hmm... so is the nested_vmx_enter_non_root_mode() call in vmx
> > KVM_SET_NESTED_STATE ioctl() still necessary? I am thinking that
> > because the same function is called again in nested_vmx_run().
>
> nested_vmx_run() is used only to emulate VMLAUNCH/VMRESUME and won't be invoked
> if the vCPU is already running L2 at the time of migration.

Ack.
