Open Source and information security mailing list archives
Date:   Tue, 8 Mar 2022 19:35:25 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        dmatlack@...gle.com
Subject: Re: [PATCH v2 19/25] KVM: x86/mmu: simplify and/or inline
 computation of shadow MMU roles

On Mon, Feb 21, 2022, Paolo Bonzini wrote:
> @@ -4822,18 +4798,23 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
>  {
>  	struct kvm_mmu *context = &vcpu->arch.root_mmu;
>  	union kvm_mmu_paging_mode cpu_mode = kvm_calc_cpu_mode(vcpu, regs);
> -	union kvm_mmu_page_role root_role =
> -		kvm_calc_shadow_mmu_root_page_role(vcpu, cpu_mode);
> +	union kvm_mmu_page_role root_role;
>  
> -	shadow_mmu_init_context(vcpu, context, cpu_mode, root_role);
> -}
> +	root_role = cpu_mode.base;
> +	root_role.level = max_t(u32, root_role.level, PT32E_ROOT_LEVEL);

Heh, we have different definitions of "simpler".   Can we split the difference
and do?

	/* KVM uses PAE paging whenever the guest isn't using 64-bit paging. */
	if (!____is_efer_lma(regs))
		root_role.level = PT32E_ROOT_LEVEL;

> -static union kvm_mmu_page_role
> -kvm_calc_shadow_npt_root_page_role(struct kvm_vcpu *vcpu,
> -				   union kvm_mmu_paging_mode role)
> -{
> -	role.base.level = kvm_mmu_get_tdp_level(vcpu);
> -	return role.base;
> +	/*
> +	 * KVM forces EFER.NX=1 when TDP is disabled, reflect it in the MMU role.
> +	 * KVM uses NX when TDP is disabled to handle a variety of scenarios,
> +	 * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
> +	 * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
> +	 * The iTLB multi-hit workaround can be toggled at any time, so assume
> +	 * NX can be used by any non-nested shadow MMU to avoid having to reset
> +	 * MMU contexts.
> +	 */
> +	root_role.efer_nx = true;
> +
> +	shadow_mmu_init_context(vcpu, context, cpu_mode, root_role);
>  }
>  
>  void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
> @@ -4846,7 +4827,10 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
>  		.efer = efer,
>  	};
>  	union kvm_mmu_paging_mode cpu_mode = kvm_calc_cpu_mode(vcpu, &regs);
> -	union kvm_mmu_page_role root_role = kvm_calc_shadow_npt_root_page_role(vcpu, cpu_mode);
> +	union kvm_mmu_page_role root_role;
> +
> +	root_role = cpu_mode.base;
> +	root_role.level = kvm_mmu_get_tdp_level(vcpu);

Regarding the WARN_ON_ONCE(root_role.direct) discussed for a different patch, how
about this for a WARN + comment?

	/* NPT requires CR0.PG=1, thus the MMU is guaranteed to be indirect. */
	WARN_ON_ONCE(root_role.direct);

>  	shadow_mmu_init_context(vcpu, context, cpu_mode, root_role);
>  	kvm_mmu_new_pgd(vcpu, nested_cr3);
> -- 
> 2.31.1
> 
> 
