Message-ID: <87ftrylqwm.fsf@vitty.brq.redhat.com>
Date:   Thu, 07 Mar 2019 15:07:05 +0100
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>
Cc:     kvm@...r.kernel.org, Junaid Shahid <junaids@...gle.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 RFC] x86/kvm/mmu: make mmu->prev_roots cache work for NPT case

Vitaly Kuznetsov <vkuznets@...hat.com> writes:

> Alternative patch: remove the filtering from kvm_mmu_get_page() and check
> for direct at the call sites. The cr4_pae setting in
> kvm_calc_mmu_role_common() can be preserved for consistency.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  arch/x86/kvm/mmu.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index f2d1d230d5b8..7fb8118f2af6 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2420,8 +2420,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  	role = vcpu->arch.mmu->mmu_role.base;
>  	role.level = level;
>  	role.direct = direct;
> -	if (role.direct)
> -		role.cr4_pae = 0;
>  	role.access = access;
>  	if (!vcpu->arch.mmu->direct_map
>  	    && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
> @@ -5176,7 +5174,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
>  		 gpa, bytes, sp->role.word);
>  
>  	offset = offset_in_page(gpa);
> -	pte_size = sp->role.cr4_pae ? 8 : 4;
> +	pte_size = (sp->role.cr4_pae && !sp->role.direct) ? 8 : 4;
>  
>  	/*
>  	 * Sometimes, the OS only writes the last one bytes to update status
> @@ -5200,7 +5198,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
>  	page_offset = offset_in_page(gpa);
>  	level = sp->role.level;
>  	*nspte = 1;
> -	if (!sp->role.cr4_pae) {
> +	if (!sp->role.cr4_pae || sp->role.direct) {
>  		page_offset <<= 1;	/* 32->64 */
>  		/*
>  		 * A 32-bit pde maps 4MB while the shadow pdes map

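To make the semantics of the two hunks above concrete, here is a
simplified userspace sketch of what I understand detect_write_misaligned()
to be checking (not kernel code; write_misaligned() is made up and
offset_in_page() is approximated with a 4k mask). The "&& !direct" part
is what the patch adds, keeping the 4-byte pte size that direct pages
effectively had back when cr4_pae was forced to 0 for them:

#include <stdio.h>

static int write_misaligned(unsigned long gpa, int bytes,
			    int cr4_pae, int direct)
{
	unsigned long offset = gpa & 0xfff;	/* ~offset_in_page(gpa) */
	int pte_size = (cr4_pae && !direct) ? 8 : 4;
	int misaligned;

	/* a 1-byte write to an aligned pte (e.g. andb in clear_bit())
	 * only updates status bits and is not misaligned */
	if (!(offset & (pte_size - 1)) && bytes == 1)
		return 0;

	/* misaligned if the write crosses a pte boundary or is
	 * shorter than 4 bytes */
	misaligned = !!((offset ^ (offset + bytes - 1)) & ~(pte_size - 1));
	misaligned |= bytes < 4;
	return misaligned;
}

int main(void)
{
	/* a 1-byte write at offset 4 starts a 32-bit pte but lands in
	 * the middle of a 64-bit one; with direct=1 the patched code
	 * falls back to 4-byte ptes again */
	printf("pae: %d, non-pae: %d, direct: %d\n",
	       write_misaligned(0x1004, 1, 1, 0),
	       write_misaligned(0x1004, 1, 0, 0),
	       write_misaligned(0x1004, 1, 1, 1));
	return 0;
}
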
While I personally prefer this approach to not setting role.cr4_pae in
kvm_calc_mmu_role_common(), I'd like to get the maintainers' opinion. (I did
test the patch with ept=off, but I have to admit I don't know much about
shadow page tables, which are what actually use detect_write_misaligned()/
get_written_sptes().)
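
For the record, my understanding of why the cache missed in the first
place: the prev_roots lookup effectively compares roles as one packed
word, and since cr4_pae follows the guest's CR4, toggling CR4.PAE
changes the role word even though direct (NPT) pages don't depend on
PAE at all. A standalone sketch with a made-up bit layout:

#include <stdint.h>
#include <stdio.h>

/* made-up layout, only to show the word-compare behavior */
union mmu_role {
	uint32_t word;
	struct {
		uint32_t level   : 4;
		uint32_t direct  : 1;
		uint32_t cr4_pae : 1;
	};
};

int main(void)
{
	union mmu_role cached = { .word = 0 }, wanted;

	cached.level = 4;
	cached.direct = 1;
	cached.cr4_pae = 0;	/* role the root was cached with */

	wanted = cached;
	wanted.cr4_pae = 1;	/* guest toggled CR4.PAE */

	/* a single differing bit and the cached root never matches */
	printf("prev_roots cache %s\n",
	       cached.word == wanted.word ? "hit" : "miss");
	return 0;
}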

Paolo, Radim, (anyone else) - any thoughts?

-- 
Vitaly
