Message-ID: <CALMp9eR94d9Xbt7ZTiaezL3hSuTQTCNX8pxiDFE9tHCpDRjrQg@mail.gmail.com>
Date: Thu, 6 Aug 2020 14:32:33 -0700
From: Jim Mattson <jmattson@...gle.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>, kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Oliver Upton <oupton@...gle.com>,
Peter Shier <pshier@...gle.com>,
Maxim Levitsky <mlevitsk@...hat.com>
Subject: Re: [PATCH] KVM: x86: Don't attempt to load PDPTRs when 64-bit mode is enabled
On Wed, Aug 5, 2020 at 12:04 AM Maxim Levitsky <mlevitsk@...hat.com> wrote:
>
> On Mon, 2020-07-13 at 18:57 -0700, Sean Christopherson wrote:
> > Don't attempt to load PDPTRs if EFER.LME=1, i.e. if 64-bit mode is
> > enabled. A recent change to reload the PDPTRs when CR0.CD or CR0.NW is
> > toggled botched the EFER.LME handling and sends KVM down the PDPTR path
> > when is_paging() is true, i.e. when the guest toggles CD/NW in 64-bit
> > mode.
> >
> > Split the CR0 checks for 64-bit vs. 32-bit PAE into separate paths. The
> > 64-bit path is specifically checking state when paging is toggled on,
> > i.e. CR0.PG transitions from 0->1. The PDPTR path now needs to run if
> > the new CR0 state has paging enabled, irrespective of whether paging was
> > already enabled. Trying to shave a few cycles to make the PDPTR path an
> > "else if" case is a mess.
> >
> > Fixes: d42e3fae6faed ("kvm: x86: Read PDPTEs on CR0.CD and CR0.NW changes")
> > Cc: Jim Mattson <jmattson@...gle.com>
> > Cc: Oliver Upton <oupton@...gle.com>
> > Cc: Peter Shier <pshier@...gle.com>
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> > ---
> >
> > The other way to fix this, with a much smaller diff stat, is to simply
> > move the !is_paging(vcpu) check inside the (vcpu->arch.efer & EFER_LME)
> > check. But that results in a ridiculous amount of nested conditionals
> > for what is a very straightforward check, e.g.
> >
> > 	if (cr0 & X86_CR0_PG) {
> > 		if (vcpu->arch.efer & EFER_LME) {
> > 			if (!is_paging(vcpu)) {
> > 				...
> > 			}
> > 		}
> > 	}
> >
> > Since this doesn't need to be backported anywhere, I didn't see any value
> > in having an intermediate step.
> >
> > arch/x86/kvm/x86.c | 24 ++++++++++++------------
> > 1 file changed, 12 insertions(+), 12 deletions(-)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 95ef629228691..5f526d94c33f3 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -819,22 +819,22 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
> >  	if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
> >  		return 1;
> >
> > -	if (cr0 & X86_CR0_PG) {
> >  #ifdef CONFIG_X86_64
> > -		if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
> > -			int cs_db, cs_l;
> > +	if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
> > +	    (cr0 & X86_CR0_PG)) {
> > +		int cs_db, cs_l;
> >
> > -			if (!is_pae(vcpu))
> > -				return 1;
> > -			kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
> > -			if (cs_l)
> > -				return 1;
> > -		} else
> > -#endif
> > -		if (is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
> > -		    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
> > +		if (!is_pae(vcpu))
> > +			return 1;
> > +		kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
> > +		if (cs_l)
> >  			return 1;
> >  	}
> > +#endif
> > +	if (!(vcpu->arch.efer & EFER_LME) && (cr0 & X86_CR0_PG) &&
> > +	    is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
> > +	    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
> > +		return 1;
It might be worth commenting on the subtlety of the test below being
skipped if the PDPTEs were loaded above. I'm assuming that the PDPTEs
shouldn't be loaded if the instruction faults.
> >  	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
> >  		return 1;
>
> I also investigated this issue (same symptom, OVMF doesn't boot),
> and after looking at the Intel and AMD PRMs, this looks like the correct solution.
> I also tested it and it works.
>
>
> Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
>
> Best regards,
> Maxim Levitsky
>