Message-ID: <Ynmv2X5eLz2OQDMB@google.com>
Date: Tue, 10 May 2022 00:20:41 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 16/22] KVM: x86/mmu: remove redundant bits from extended
role
On Thu, Apr 14, 2022, Paolo Bonzini wrote:
> Before the separation of the CPU and the MMU role, CR0.PG was not
> available in the base MMU role, because two-dimensional paging always
> used direct=1 in the MMU role. However, now that the raw role is
> snapshotted in mmu->cpu_role, CR0.PG *can* be found (though inverted)
> as !cpu_role.base.direct. There is no need to store it again in union
> kvm_mmu_extended_role; instead, write an is_cr0_pg accessor by hand that
> takes care of the inversion.
>
> Likewise, CR4.PAE is now always present in the CPU role as
> !cpu_role.base.has_4_byte_gpte. The inversion makes certain tests on
> the MMU role easier, and is easily hidden by the is_cr4_pae accessor
> when operating on the CPU role.
>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> arch/x86/include/asm/kvm_host.h | 2 --
> arch/x86/kvm/mmu/mmu.c | 14 ++++++++++----
> 2 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6bc5550ae530..52ceeadbed28 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -367,8 +367,6 @@ union kvm_mmu_extended_role {
> struct {
> unsigned int valid:1;
> unsigned int execonly:1;
> - unsigned int cr0_pg:1;
> - unsigned int cr4_pae:1;
> unsigned int cr4_pse:1;
> unsigned int cr4_pke:1;
> unsigned int cr4_smap:1;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 483a3761db81..cf8a41675a79 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -224,16 +224,24 @@ static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \
> { \
> return !!(mmu->cpu_role. base_or_ext . reg##_##name); \
> }
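
(For reference, the macro expansion, e.g. for BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse):

	static inline bool __maybe_unused is_cr4_pse(struct kvm_mmu *mmu)
	{
		return !!(mmu->cpu_role.ext.cr4_pse);
	}

so the open coded is_cr0_pg() and is_cr4_pae() below follow the same naming
scheme, just with the inverted base role bits swapped in.)
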
> -BUILD_MMU_ROLE_ACCESSOR(ext, cr0, pg);
> BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
> BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
> -BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pae);
> BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);
> BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
> BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
> BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
> BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
>
> +static inline bool is_cr0_pg(struct kvm_mmu *mmu)
> +{
> + return !mmu->cpu_role.base.direct;
> +}
> +
> +static inline bool is_cr4_pae(struct kvm_mmu *mmu)
> +{
> + return !mmu->cpu_role.base.has_4_byte_gpte;

If it's not too late for fixup, this should be:

	return is_cr0_pg(mmu) && !mmu->cpu_role.base.has_4_byte_gpte;

because has_4_byte_gpte will also be false when paging is disabled. The current
code works because the only users check is_cr0_pg() beforehand, but IMO this is
unnecessarily dangerous to leave lying around (and the previous code set cr4_pae
iff cr0_pg=1).
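
E.g. a hypothetical caller (not in the tree) that doesn't pre-check paging
would get the wrong answer from the current code:

	/*
	 * Hypothetical helper to show the trap: for a guest with CR0.PG=0,
	 * has_4_byte_gpte is also false, so the current is_cr4_pae() returns
	 * true and this reports "PAE/64-bit" for a guest that has paging
	 * disabled entirely.  With is_cr0_pg() folded into is_cr4_pae(), it
	 * correctly reports "none".
	 */
	static const char *guest_paging_mode(struct kvm_mmu *mmu)
	{
		if (!is_cr4_pae(mmu))
			return is_cr0_pg(mmu) ? "32-bit" : "none";
		return "PAE/64-bit";
	}
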
If it's too late for fixup...
--
From: Sean Christopherson <seanjc@...gle.com>
Date: Mon, 9 May 2022 17:13:39 -0700
Subject: [PATCH] KVM: x86/mmu: Return true from is_cr4_pae() iff CR0.PG is set

Condition is_cr4_pae() on is_cr0_pg() in addition to the !4-byte gPTE
check. From the MMU's perspective, PAE is disabled if paging is
disabled. The current code works because all callers check is_cr0_pg()
before invoking is_cr4_pae(), but relying on callers to maintain that
behavior is unnecessarily risky.

Fixes: faf729621c96 ("KVM: x86/mmu: remove redundant bits from extended role")
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 909372762363..d1c20170a553 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -240,7 +240,7 @@ static inline bool is_cr0_pg(struct kvm_mmu *mmu)
 
 static inline bool is_cr4_pae(struct kvm_mmu *mmu)
 {
-	return !mmu->cpu_role.base.has_4_byte_gpte;
+	return is_cr0_pg(mmu) && !mmu->cpu_role.base.has_4_byte_gpte;
 }
 
 static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)

base-commit: 2764011106d0436cb44702cfb0981339d68c3509
--