Message-Id: <20200714015732.32426-1-sean.j.christopherson@intel.com>
Date: Mon, 13 Jul 2020 18:57:32 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Oliver Upton <oupton@...gle.com>,
Peter Shier <pshier@...gle.com>
Subject: [PATCH] KVM: x86: Don't attempt to load PDPTRs when 64-bit mode is enabled
Don't attempt to load PDPTRs if EFER.LME=1, i.e. if 64-bit mode is
enabled. A recent change to reload the PDPTRs when CR0.CD or CR0.NW is
toggled botched the EFER.LME handling and sends KVM down the PDPTR path
when is_paging() is true, i.e. when the guest toggles CD/NW in 64-bit
mode.
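
For reference, here's the pre-patch flow (the '-' lines in the diff
below); the comments are annotations, not part of the actual code,
showing how a 64-bit guest reaches load_pdptrs():

	if (cr0 & X86_CR0_PG) {
#ifdef CONFIG_X86_64
		/*
		 * Skipped once the guest is up and running, since
		 * is_paging() is then true for every CR0 write.
		 */
		if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
			...
		} else
#endif
		/*
		 * Reached by a 64-bit guest toggling CD/NW: is_paging()
		 * is true, is_pae() is true (long mode requires
		 * CR4.PAE=1), and CD/NW are in pdptr_bits, so KVM tries
		 * to load the PDPTRs despite EFER.LME=1.
		 */
		if (is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
		    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
			return 1;
	}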
Split the CR0 checks for 64-bit vs. 32-bit PAE into separate paths. The
64-bit path is specifically checking state when paging is toggled on,
i.e. CR0.PG transitions from 0->1. The PDPTR path now needs to run if
the new CR0 state has paging enabled, irrespective of whether paging was
already enabled. Trying to shave a few cycles to make the PDPTR path an
"else if" case is a mess.
Fixes: d42e3fae6faed ("kvm: x86: Read PDPTEs on CR0.CD and CR0.NW changes")
Cc: Jim Mattson <jmattson@...gle.com>
Cc: Oliver Upton <oupton@...gle.com>
Cc: Peter Shier <pshier@...gle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
---
The other way to fix this, with a much smaller diff stat, is to simply
move the !is_paging(vcpu) check inside the (vcpu->arch.efer & EFER_LME)
check. But that results in a ridiculous amount of nested conditionals
for what is a very straightforward check, e.g.
	if (cr0 & X86_CR0_PG) {
		if (vcpu->arch.efer & EFER_LME) {
			if (!is_paging(vcpu)) {
				...
			}
		}
	}
Since this doesn't need to be backported anywhere, I didn't see any value
in having an intermediate step.
arch/x86/kvm/x86.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95ef629228691..5f526d94c33f3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -819,22 +819,22 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
 		return 1;
 
-	if (cr0 & X86_CR0_PG) {
 #ifdef CONFIG_X86_64
-		if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
-			int cs_db, cs_l;
+	if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
+	    (cr0 & X86_CR0_PG)) {
+		int cs_db, cs_l;
 
-			if (!is_pae(vcpu))
-				return 1;
-			kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
-			if (cs_l)
-				return 1;
-		} else
-#endif
-		if (is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
-		    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
+		if (!is_pae(vcpu))
+			return 1;
+		kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
+		if (cs_l)
 			return 1;
 	}
+#endif
+	if (!(vcpu->arch.efer & EFER_LME) && (cr0 & X86_CR0_PG) &&
+	    is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
+	    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
+		return 1;
 
 	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
 		return 1;
--
2.26.0