Message-ID: <166f2025-1479-a5d0-9d0d-da158b01fa95@redhat.com>
Date: Thu, 14 Apr 2022 10:27:11 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: seanjc@...gle.com
Subject: Re: [PATCH 16/22] KVM: x86/mmu: remove redundant bits from extended
role
On 4/14/22 09:39, Paolo Bonzini wrote:
> Before the separation of the CPU and the MMU role, CR0.PG was not
> available in the base MMU role, because two-dimensional paging always
> used direct=1 in the MMU role. However, now that the raw role is
> snapshotted in mmu->cpu_role, CR0.PG *can* be found (though inverted)
> as !cpu_role.base.direct. There is no need to store it again in union
> kvm_mmu_extended_role; instead, write an is_cr0_pg accessor by hand that
> takes care of the inversion.
>
> Likewise, CR4.PAE is now always present in the CPU role as
> !cpu_role.base.has_4_byte_gpte. The inversion makes certain tests on
> the MMU role easier, and is easily hidden by the is_cr4_pae accessor
> when operating on the CPU role.
>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
Better:
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cf8a41675a79..2a9b589192c3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -234,7 +234,7 @@ BUILD_MMU_ROLE_ACCESSOR(base, efer, nx);
 
 static inline bool is_cr0_pg(struct kvm_mmu *mmu)
 {
-	return !mmu->cpu_role.base.direct;
+	return mmu->cpu_role.base.level > 0;
 }
 
 static inline bool is_cr4_pae(struct kvm_mmu *mmu)
given that the future of the direct bit is unclear.
Paolo