Message-ID: <8c92f44f-3e56-5a5d-76c2-b50b8fe58b3d@redhat.com>
Date:   Thu, 12 May 2022 18:09:49 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 16/22] KVM: x86/mmu: remove redundant bits from extended
 role

On 5/12/22 16:18, Sean Christopherson wrote:
> On Thu, May 12, 2022, Paolo Bonzini wrote:
>> On 5/10/22 02:20, Sean Christopherson wrote:
>>> --
>>> From: Sean Christopherson<seanjc@...gle.com>
>>> Date: Mon, 9 May 2022 17:13:39 -0700
>>> Subject: [PATCH] KVM: x86/mmu: Return true from is_cr4_pae() iff CR0.PG is set
>>>
>>> Condition is_cr4_pae() on is_cr0_pg() in addition to the !4-byte gPTE
>>> check.  From the MMU's perspective, PAE is disabled if paging is
>>> disabled.  The current code works because all callers check is_cr0_pg()
>>> before invoking is_cr4_pae(), but relying on callers to maintain that
>>> behavior is unnecessarily risky.
>>>
>>> Fixes: faf729621c96 ("KVM: x86/mmu: remove redundant bits from extended role")
>>> Signed-off-by: Sean Christopherson<seanjc@...gle.com>
>>> ---
>>>    arch/x86/kvm/mmu/mmu.c | 2 +-
>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>> index 909372762363..d1c20170a553 100644
>>> --- a/arch/x86/kvm/mmu/mmu.c
>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>> @@ -240,7 +240,7 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu)
>>>
>>>    static inline bool is_cr4_pae(struct kvm_mmu *mmu)
>>>    {
>>> -        return !mmu->cpu_role.base.has_4_byte_gpte;
>>> +        return is_cr0_pg(mmu) && !mmu->cpu_role.base.has_4_byte_gpte;
>>>    }
>>>
>>>    static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
>>
>> Hmm, thinking more about it, this is not needed, for two kinds of
>> opposite reasons:
>>
>> * if is_cr4_pae() really were to represent the raw CR4.PAE value, this is
>> incorrect and it should be up to the callers to check is_cr0_pg()
>>
>> * if is_cr4_pae() instead represents 8-byte page table entries, then it
>> does so even before this patch, because of the following logic in
>> kvm_calc_cpu_role():
>>
>>          if (!____is_cr0_pg(regs)) {
>>                  role.base.direct = 1;
>>                  return role;
>>          }
>> 	...
>>          role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
>>
>>
>> So whatever meaning we give to is_cr4_pae(), there is no need for the
>> adjustment.
> 
> I disagree, because is_cr4_pae() doesn't represent either of those things.  It
> represents the effective (not raw) CR4.PAE from the MMU's perspective.

Doh, you're right that has_4_byte_gpte is actually 0 if CR0.PG=0. 
Swapping stuff back is hard.
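To double-check myself I wrote a tiny standalone model of the logic
(just the booleans involved, not the real KVM structs):

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the bits of the cpu_role that matter here. */
struct role { bool cr0_pg; bool has_4_byte_gpte; };

/* Mimics kvm_calc_cpu_role(): it bails out before setting
 * has_4_byte_gpte when CR0.PG=0, so the field stays 0. */
static struct role calc_role(bool cr0_pg, bool cr4_pae)
{
	struct role r = { .cr0_pg = cr0_pg };

	if (!cr0_pg)
		return r;
	r.has_4_byte_gpte = !cr4_pae;
	return r;
}

static bool is_cr4_pae_old(struct role r)
{
	return !r.has_4_byte_gpte;
}

static bool is_cr4_pae_new(struct role r)
{
	return r.cr0_pg && !r.has_4_byte_gpte;
}

int main(void)
{
	struct role r = calc_role(false, true);	/* CR0.PG=0, CR4.PAE=1 */

	/* Prints "old=1 new=0": the old helper claims PAE even
	 * though the guest has paging disabled. */
	printf("old=%d new=%d\n", is_cr4_pae_old(r), is_cr4_pae_new(r));
	return 0;
}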

What do you think about a WARN_ON_ONCE(!is_cr0_pg(mmu))?
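That is, keep the "8-byte page table entries" reading of the helper and
just catch callers that use it with paging off, something like (untested):

static inline bool is_cr4_pae(struct kvm_mmu *mmu)
{
	WARN_ON_ONCE(!is_cr0_pg(mmu));
	return !mmu->cpu_role.base.has_4_byte_gpte;
}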

Paolo
