Date:   Thu, 12 May 2022 21:34:04 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 16/22] KVM: x86/mmu: remove redundant bits from extended role

On Thu, May 12, 2022, Paolo Bonzini wrote:
> On 5/12/22 16:18, Sean Christopherson wrote:
> > On Thu, May 12, 2022, Paolo Bonzini wrote:
> > > On 5/10/22 02:20, Sean Christopherson wrote:
> > > > --
> > > > From: Sean Christopherson<seanjc@...gle.com>
> > > > Date: Mon, 9 May 2022 17:13:39 -0700
> > > > Subject: [PATCH] KVM: x86/mmu: Return true from is_cr4_pae() iff CR0.PG is set
> > > > 
> > > > Condition is_cr4_pae() on is_cr0_pg() in addition to the !4-byte gPTE
> > > > check.  From the MMU's perspective, PAE is disabled if paging is
> > > > disabled.  The current code works because all callers check is_cr0_pg()
> > > > before invoking is_cr4_pae(), but relying on callers to maintain that
> > > > behavior is unnecessarily risky.
> > > > 
> > > > Fixes: faf729621c96 ("KVM: x86/mmu: remove redundant bits from extended role")
> > > > Signed-off-by: Sean Christopherson<seanjc@...gle.com>
> > > > ---
> > > >    arch/x86/kvm/mmu/mmu.c | 2 +-
> > > >    1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > > > index 909372762363..d1c20170a553 100644
> > > > --- a/arch/x86/kvm/mmu/mmu.c
> > > > +++ b/arch/x86/kvm/mmu/mmu.c
> > > > @@ -240,7 +240,7 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu)
> > > > 
> > > >    static inline bool is_cr4_pae(struct kvm_mmu *mmu)
> > > >    {
> > > > -        return !mmu->cpu_role.base.has_4_byte_gpte;
> > > > +        return is_cr0_pg(mmu) && !mmu->cpu_role.base.has_4_byte_gpte;
> > > >    }
> > > > 
> > > >    static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
> > > 
> > > Hmm, thinking more about it, this is not needed, for two kinds of opposite
> > > reasons:
> > > 
> > > * if is_cr4_pae() really were to represent the raw CR4.PAE value, this is
> > > incorrect and it should be up to the callers to check is_cr0_pg()
> > > 
> > > * if is_cr4_pae() instead represents 8-byte page table entries, then it already
> > > does so even before this patch, because of the following logic in
> > > kvm_calc_cpu_role():
> > > 
> > >          if (!____is_cr0_pg(regs)) {
> > >                  role.base.direct = 1;
> > >                  return role;
> > >          }
> > > 	...
> > >          role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
> > > 
> > > 
> > > So whatever meaning we give to is_cr4_pae(), there is no need for the
> > > adjustment.
> > 
> > I disagree, because is_cr4_pae() doesn't represent either of those things.  It
> > represents the effective (not raw) CR4.PAE from the MMU's perspective.
> 
> Doh, you're right that has_4_byte_gpte is actually 0 if CR0.PG=0. Swapping
> stuff back is hard.
> 
> What do you think about a WARN_ON_ONCE(!is_cr0_pg(mmu))?

Why bother?  WARN and continue would be rather silly as we'd knowingly let KVM
do something wrong for no benefit.  And this

	return !WARN_ON_ONCE(!is_cr0_pg(mmu)) && !mmu->cpu_role.base.has_4_byte_gpte;

feels wrong because there's nothing fundamentally broken with calling is_cr4_pae()
without first checking CR0.PG.
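
For reference, the plain "WARN and continue" variant would presumably look something
like the sketch below (hypothetical, not proposed code).  Since has_4_byte_gpte is
already 0 when CR0.PG=0, it would still return true, i.e. report PAE with paging
disabled, which is exactly the "knowingly wrong" behavior mentioned above:

static inline bool is_cr4_pae(struct kvm_mmu *mmu)
{
	/* Hypothetical sketch: warn on misuse, but keep going anyway. */
	WARN_ON_ONCE(!is_cr0_pg(mmu));

	/* With CR0.PG=0 the role bit is 0, so this still returns true. */
	return !mmu->cpu_role.base.has_4_byte_gpte;
}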

If you really want to avoid the is_cr0_pg() check, why not just use has_4_byte_gpte
directly?  Logically I think that's easy enough to follow, e.g. 64-bit gPTEs == 8
bytes, 32-bit gPTEs == 4 bytes.  We can always revisit the need for is_cr4_pae() if
the MMU needs to identify PAE paging for some reason, e.g. for PDPTR awareness (a
hypothetical sketch of such a helper is included after the diff below).

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 909372762363..b05190027e20 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -238,11 +238,6 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu)
         return mmu->cpu_role.base.level > 0;
 }

-static inline bool is_cr4_pae(struct kvm_mmu *mmu)
-{
-        return !mmu->cpu_role.base.has_4_byte_gpte;
-}
-
 static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 {
        struct kvm_mmu_role_regs regs = {
@@ -4855,7 +4850,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,

        if (!is_cr0_pg(context))
                context->gva_to_gpa = nonpaging_gva_to_gpa;
-       else if (is_cr4_pae(context))
+       else if (!context->cpu_role.base.has_4_byte_gpte)
                context->gva_to_gpa = paging64_gva_to_gpa;
        else
                context->gva_to_gpa = paging32_gva_to_gpa;
@@ -4877,7 +4872,7 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte

        if (!is_cr0_pg(context))
                nonpaging_init_context(context);
-       else if (is_cr4_pae(context))
+       else if (!context->cpu_role.base.has_4_byte_gpte)
                paging64_init_context(context);
        else
                paging32_init_context(context);
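
Purely for illustration, if the MMU ever does need PAE awareness (e.g. for the
PDPTRs), a reintroduced helper could key off the role's root level instead of the
gPTE size.  This is a hypothetical sketch, not part of the patch, and the name is
made up:

/*
 * Hypothetical: PAE paging (CR0.PG=1, CR4.PAE=1, EFER.LMA=0) is the only
 * mode that uses a 3-level root, so the role's level alone identifies it.
 */
static inline bool mmu_is_pae_paging(struct kvm_mmu *mmu)
{
	return mmu->cpu_role.base.level == PT32E_ROOT_LEVEL;
}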
