Date:   Fri, 22 Feb 2019 18:17:47 +0100
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org
Cc:     Radim Krčmář <rkrcmar@...hat.com>,
        Junaid Shahid <junaids@...gle.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/kvm/mmu: make mmu->prev_roots cache work for NPT case

On 22/02/19 17:46, Vitaly Kuznetsov wrote:
> I noticed that fast_cr3_switch() always fails when we switch back from L2
> to L1 as it is not able to find a cached root. This is odd: the host's CR3
> usually stays the same, so we expect to always follow the fast path. It
> turns out the problem is that the page role always mismatches, because
> kvm_mmu_get_page() filters out cr4_pae when the role is direct; that value
> is stored in the page header and later compared with new_role in
> cached_root_available(). As cr4_pae is always set in long mode, the
> prev_roots cache is dysfunctional.
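
In other words, the comparison that fails is roughly the following
(paraphrased from arch/x86/kvm/mmu.c as an illustration, not the exact code):

	/* kvm_mmu_get_page(): the role stored in the page header drops cr4_pae */
	role = vcpu->arch.mmu->mmu_role.base;
	role.direct = direct;
	if (role.direct)
		role.cr4_pae = 0;
	...
	sp->role = role;

	/*
	 * cached_root_available(): in long mode new_role has cr4_pae set, so
	 * the full-word comparison below never matches a direct root.
	 */
	if (new_cr3 == root.cr3 && VALID_PAGE(root.hpa) &&
	    new_role.word == page_header(root.hpa)->role.word)
		break;	/* fast path: reuse the cached root */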

Really, cr4_pae means "are the PTEs 8 bytes?".  So I think your patch is
correct, but on top of it we should set cr4_pae to 1 (not zero!!) for
kvm_calc_shadow_ept_root_page_role, init_kvm_nested_mmu and
kvm_calc_tdp_mmu_root_page_role.  Or maybe everything breaks with that
change.
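
As an untested sketch of that idea (not a real hunk against mmu.c, just the
one bit we would force in each of those three role-computation paths):

	/*
	 * Sketch only: in kvm_calc_shadow_ept_root_page_role(),
	 * init_kvm_nested_mmu() and kvm_calc_tdp_mmu_root_page_role(),
	 * mark the PTEs as 8 bytes regardless of the guest's CR4.PAE.
	 */
	role.cr4_pae = 1;	/* "the PTEs are 8 bytes" */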

> - Do not clear cr4_pae in kvm_mmu_get_page() and check direct at the call
>   sites (detect_write_misaligned(), get_written_sptes()).

These only run with shadow page tables, by the way.
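
For reference, both of them just use cr4_pae to infer the guest PTE size,
roughly (paraphrased, not the exact code):

	/* detect_write_misaligned(), paraphrased */
	pte_size = sp->role.cr4_pae ? 8 : 4;
	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);

	/* get_written_sptes(), paraphrased */
	if (!sp->role.cr4_pae)
		page_offset <<= 1;	/* 32-bit PTEs: double the offset */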

Paolo
