Date:   Wed, 10 Oct 2018 18:53:59 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org
Cc:     Radim Krčmář <rkrcmar@...hat.com>,
        Jim Mattson <jmattson@...gle.com>,
        Liran Alon <liran.alon@...cle.com>,
        Sean Christopherson <sean.j.christopherson@...el.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 0/9] x86/kvm/nVMX: optimize MMU switch between L1 and
 L2

On 08/10/2018 21:28, Vitaly Kuznetsov wrote:
> Changes since v3 [Sean Christopherson]:
> - Add Reviewed-by tags (thanks!).
> - Drop stale role initializer in kvm_calc_shadow_ept_root_page_role
>   (interim change in PATCH4, the end result is the same).
> - Use '!!' instead of '!= 0' for kvm_read_cr4_bits() readings.
> 
> Also, rebased to the current kvm/queue.
> 
> Original description:
> 
> Currently, when we switch from L1 to L2 (VMX) we do the following:
> - Re-initialize L1 MMU as shadow EPT MMU (nested_ept_init_mmu_context())
> - Re-initialize 'nested' MMU (nested_vmx_load_cr3() -> init_kvm_nested_mmu())
> 
> When we switch back we do:
> - Re-initialize L1 MMU (nested_vmx_load_cr3() -> init_kvm_tdp_mmu())
> 
> This seems to be sub-optimal. Initializing the MMU is expensive (thanks to
> update_permission_bitmask(), update_pkru_bitmask(), ...). This series tries
> to solve the issue by splitting the L1-normal and L1-nested MMUs and
> checking whether an MMU reset is really needed. This spares us about 1000
> CPU cycles on nested vmexit.
> 
> A brief look at SVM makes me think it can be optimized in exactly the same
> way; I'll do that in a separate series.
> 
> Paolo Bonzini (1):
>   x86/kvm/mmu: get rid of redundant kvm_mmu_setup()
> 
> Vitaly Kuznetsov (8):
>   x86/kvm/mmu: make vcpu->mmu a pointer to the current MMU
>   x86/kvm/mmu.c: set get_pdptr hook in kvm_init_shadow_ept_mmu()
>   x86/kvm/mmu.c: add kvm_mmu parameter to kvm_mmu_free_roots()
>   x86/kvm/mmu: introduce guest_mmu
>   x86/kvm/mmu: make space for source data caching in struct kvm_mmu
>   x86/kvm/nVMX: introduce source data cache for
>     kvm_init_shadow_ept_mmu()
>   x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed
>   x86/kvm/mmu: check if MMU reconfiguration is needed in
>     init_kvm_nested_mmu()
> 
>  arch/x86/include/asm/kvm_host.h |  44 +++-
>  arch/x86/kvm/mmu.c              | 357 +++++++++++++++++++-------------
>  arch/x86/kvm/mmu.h              |   8 +-
>  arch/x86/kvm/mmu_audit.c        |  12 +-
>  arch/x86/kvm/paging_tmpl.h      |  15 +-
>  arch/x86/kvm/svm.c              |  14 +-
>  arch/x86/kvm/vmx.c              |  55 +++--
>  arch/x86/kvm/x86.c              |  22 +-
>  8 files changed, 322 insertions(+), 205 deletions(-)
> 

Queued, thanks.

Paolo
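
For readers who want a concrete picture of the optimization, below is a
minimal, self-contained C sketch of the two ideas from the cover letter:
keeping separate root/guest MMU contexts and switching a pointer between
them, and caching the "role" an MMU was last configured with so the
expensive re-initialization can be skipped. All names here (demo_mmu,
demo_vcpu, demo_mmu_role, etc.) are illustrative stand-ins, not the actual
struct kvm_mmu / struct kvm_vcpu code touched by the series.

#include <stdbool.h>

/* Illustrative stand-in for the bits that determine how an MMU must be
 * configured (paging level, EPT A/D, execute-only, ...). */
struct demo_mmu_role {
        unsigned int level;
        bool ept_ad;
        bool execonly;
};

struct demo_mmu {
        struct demo_mmu_role role;      /* role this MMU was last built for */
        bool valid;                     /* built at least once? */
};

static bool demo_role_equal(struct demo_mmu_role a, struct demo_mmu_role b)
{
        return a.level == b.level && a.ept_ad == b.ept_ad &&
               a.execonly == b.execonly;
}

/* Idea 1: re-initialize only when the role actually changed. */
static void demo_init_mmu(struct demo_mmu *mmu, struct demo_mmu_role new_role)
{
        if (mmu->valid && demo_role_equal(mmu->role, new_role))
                return;                 /* nothing changed: skip the rebuild */

        mmu->role = new_role;
        mmu->valid = true;
        /* ... expensive work would go here: permission bitmasks, etc. ... */
}

/* Idea 2: keep separate L1 (root) and nested (guest) contexts and switch a
 * pointer on L1 <-> L2 transitions instead of rebuilding one context. */
struct demo_vcpu {
        struct demo_mmu root_mmu;       /* used while running L1 */
        struct demo_mmu guest_mmu;      /* shadow-EPT context used for L2 */
        struct demo_mmu *mmu;           /* currently active context */
};

static void demo_enter_l2(struct demo_vcpu *vcpu, struct demo_mmu_role role)
{
        vcpu->mmu = &vcpu->guest_mmu;
        demo_init_mmu(vcpu->mmu, role); /* usually a no-op after first switch */
}

static void demo_leave_l2(struct demo_vcpu *vcpu, struct demo_mmu_role role)
{
        vcpu->mmu = &vcpu->root_mmu;
        demo_init_mmu(vcpu->mmu, role);
}

With this structure, the nested vmexit path turns unconditional MMU
re-initialization into (at most) a pointer switch plus a cheap role
comparison, which is where the roughly 1000-cycle saving quoted in the
cover letter comes from.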
