Date:   Thu, 10 Feb 2022 00:30:08 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     David Matlack <dmatlack@...gle.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, vkuznets@...hat.com
Subject: Re: [PATCH 01/23] KVM: MMU: pass uses_nx directly to
 reset_shadow_zero_bits_mask

On Fri, Feb 04, 2022, David Matlack wrote:
> On Fri, Feb 04, 2022 at 06:56:56AM -0500, Paolo Bonzini wrote:
> > reset_shadow_zero_bits_mask has a very unintuitive way of deciding
> > whether the shadow pages will use the NX bit.  The function is used in
> > two cases, shadow paging and shadow NPT; shadow paging has a use for
> > EFER.NX and needs to force it enabled, while shadow NPT only needs it
> > depending on L1's setting.
> > 
> > The actual root problem here is that is_efer_nx, despite being part
> > of the "base" role, only matches the format of the shadow pages in the
> > NPT case.  For now, just remove the ugly variable initialization and move
> > the call to reset_shadow_zero_bits_mask out of shadow_mmu_init_context.
> > The parameter can then be removed after the root problem in the role
> > is fixed.
> > 
> > Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> 
> Reviewed-by: David Matlack <dmatlack@...gle.com>
> 
> (I agree this commit makes no functional change.)

There may not be a functional change, but it drops an optimization and makes
future code/patches more fragile by obscuring the relationship between
shadow_mmu_init_context() and __kvm_mmu_new_pgd().

> > @@ -4829,8 +4820,6 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
> >  
> >  	reset_guest_paging_metadata(vcpu, context);
> >  	context->shadow_root_level = new_role.base.level;
> > -
> > -	reset_shadow_zero_bits_mask(vcpu, context);

This is guarded by:

	if (new_role.as_u64 == context->mmu_role.as_u64)
		return;

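For reference, the pre-patch flow was roughly the following (a simplified
sketch with arguments elided, not the exact kernel code):

	static void shadow_mmu_init_context(...)
	{
		if (new_role.as_u64 == context->mmu_role.as_u64)
			return;		/* role unchanged, mask reset skipped too */

		...

		reset_guest_paging_metadata(vcpu, context);
		context->shadow_root_level = new_role.base.level;

		reset_shadow_zero_bits_mask(vcpu, context);
	}

i.e. the mask was only recomputed when the role actually changed.
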
> >  }
> >  
> >  static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
> > @@ -4841,6 +4830,16 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
> >  		kvm_calc_shadow_mmu_root_page_role(vcpu, regs, false);
> >  
> >  	shadow_mmu_init_context(vcpu, context, regs, new_role);
> > +
> > +	/*
> > +	 * KVM uses NX when TDP is disabled to handle a variety of scenarios,
> > +	 * notably for huge SPTEs if iTLB multi-hit mitigation is enabled and
> > +	 * to generate correct permissions for CR0.WP=0/CR4.SMEP=1/EFER.NX=0.
> > +	 * The iTLB multi-hit workaround can be toggled at any time, so assume
> > +	 * NX can be used by any non-nested shadow MMU to avoid having to reset
> > +	 * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
> > +	 */
> > +	reset_shadow_zero_bits_mask(vcpu, context, true);

Whereas this will recompute the mask even if the role doesn't change.  That
matters later in the series, because this sequence:

	shadow_mmu_init_context(vcpu, context, &regs, new_role);
	reset_shadow_zero_bits_mask(vcpu, context, is_efer_nx(context));
	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);

becomes even more difficult to untangle.
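
I.e., annotating the calls with the points above:

	/* may return early when the role is unchanged ... */
	shadow_mmu_init_context(vcpu, context, &regs, new_role);
	/* ... but the mask is recomputed unconditionally */
	reset_shadow_zero_bits_mask(vcpu, context, is_efer_nx(context));
	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);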

And looking at where this series ends up, I don't understand the purpose of this
change.  Patch 18 essentially reverts this patch, and I see nothing in between
that will break without the temporary change.  That patch becomes:

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 02e6d256805d..f9c96de1189d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4408,7 +4408,7 @@ static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
         * NX can be used by any non-nested shadow MMU to avoid having to reset
         * MMU contexts.  Note, KVM forces EFER.NX=1 when TDP is disabled.
         */
-       bool uses_nx = is_efer_nx(context) || !tdp_enabled;
+       bool uses_nx = context->mmu_role.efer_nx;

        /* @amd adds a check on bit 0 of SPTEs, which KVM shouldn't use anyways. */
        bool is_amd = true;

though it needs to update the comment as well.
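
I.e., with the uses_nx parameter never introduced, the end state would look
something like this (a sketch, not the exact patch; the rest of the function
is unchanged):

	static void reset_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
						struct kvm_mmu *context)
	{
		/*
		 * The existing comment about forcing NX when TDP is disabled
		 * would be reworded here, since uses_nx now comes straight
		 * from the role.
		 */
		bool uses_nx = context->mmu_role.efer_nx;

		/* @amd adds a check on bit 0 of SPTEs, which KVM shouldn't use anyways. */
		bool is_amd = true;

		...
	}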

> >  }
> >  
> >  static union kvm_mmu_role
> > @@ -4872,6 +4871,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
> >  	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);
> >  
> >  	shadow_mmu_init_context(vcpu, context, &regs, new_role);
> > +	reset_shadow_zero_bits_mask(vcpu, context, is_efer_nx(context));
> 
> Out of curiosity, how does KVM mitigate iTLB multi-hit when shadowing
> NPT and the guest has not enabled EFER.NX?
> 
> >  }
> >  EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
> >  
> > -- 
> > 2.31.1
> > 
> > 
