Message-ID: <20190307160633.GA4986@linux.intel.com>
Date: Thu, 7 Mar 2019 08:06:33 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
kvm@...r.kernel.org, Junaid Shahid <junaids@...gle.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 RFC] x86/kvm/mmu: make mmu->prev_roots cache work for
NPT case

On Thu, Mar 07, 2019 at 03:07:05PM +0100, Vitaly Kuznetsov wrote:
> Vitaly Kuznetsov <vkuznets@...hat.com> writes:
>
> > Alternative patch: remove the filtering from kvm_mmu_get_page() and check
> > for role.direct at the call sites instead. The cr4_pae setting in
> > kvm_calc_mmu_role_common() can be preserved for consistency.
> >
> > Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> > ---
> > arch/x86/kvm/mmu.c | 6 ++----
> > 1 file changed, 2 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index f2d1d230d5b8..7fb8118f2af6 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -2420,8 +2420,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
> >  	role = vcpu->arch.mmu->mmu_role.base;
> >  	role.level = level;
> >  	role.direct = direct;
> > -	if (role.direct)
> > -		role.cr4_pae = 0;
> >  	role.access = access;
> >  	if (!vcpu->arch.mmu->direct_map
> >  	    && vcpu->arch.mmu->root_level <= PT32_ROOT_LEVEL) {
> > @@ -5176,7 +5174,7 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
> >  		 gpa, bytes, sp->role.word);
> >  
> >  	offset = offset_in_page(gpa);
> > -	pte_size = sp->role.cr4_pae ? 8 : 4;
> > +	pte_size = (sp->role.cr4_pae && !sp->role.direct) ? 8 : 4;
> >  
> >  	/*
> >  	 * Sometimes, the OS only writes the last one bytes to update status
> > @@ -5200,7 +5198,7 @@ static u64 *get_written_sptes(struct kvm_mmu_page *sp, gpa_t gpa, int *nspte)
> >  	page_offset = offset_in_page(gpa);
> >  	level = sp->role.level;
> >  	*nspte = 1;
> > -	if (!sp->role.cr4_pae) {
> > +	if (!sp->role.cr4_pae || sp->role.direct) {
> >  		page_offset <<= 1;	/* 32->64 */
> >  		/*
> >  		 * A 32-bit pde maps 4MB while the shadow pdes map
>
> While I personally prefer this approach over not setting role.cr4_pae in
> kvm_calc_mmu_role_common(), I'd like to get the maintainers' opinion. (I
> did test the patch with ept=off, but I have to admit I don't know much
> about the shadow page tables, which are what actually use
> detect_write_misaligned()/get_written_sptes().)
>
> Paolo, Radim, (anyone else) - any thoughts?

The changes to detect_write_misaligned() and get_written_sptes() are
wrong/unnecessary, as those functions should only be called for indirect
shadow pages, e.g.:

	for_each_gfn_indirect_valid_sp(vcpu->kvm, sp, gfn) {
		if (detect_write_misaligned(sp, gpa, bytes) ||
		      detect_write_flooding(sp)) {
			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
			++vcpu->kvm->stat.mmu_flooded;
			continue;
		}

		spte = get_written_sptes(sp, gpa, &npte);
		if (!spte)
			continue;

		...
	}

If anything, they could WARN_ON(sp->role.direct), but IMO that's overkill
since they're static helpers with a single call site (above).
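
Purely as illustration, a minimal sketch of what that assertion could look
like if we did want it (hypothetical, not something I'm proposing):

	static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
					    int bytes)
	{
		unsigned offset, pte_size, misaligned;

		/* Only indirect sps should reach this helper (see above). */
		WARN_ON(sp->role.direct);
		...
	}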

I'm missing the background for this patch: why does clearing role.cr4_pae
for direct SPTEs cause problems with the prev_roots cache?
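
For context, my guess is that the relevant check is the role comparison in
the cached-root lookup, roughly along these lines (paraphrased from memory,
so the exact code may differ):

	for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
		swap(root, mmu->prev_roots[i]);

		/*
		 * The full role word must match, so if the newly computed
		 * role has cr4_pae set while the cached root's shadow page
		 * had it cleared, the lookup never hits for direct roots.
		 */
		if (new_cr3 == root.cr3 && VALID_PAGE(root.hpa) &&
		    page_header(root.hpa) != NULL &&
		    new_role.word == page_header(root.hpa)->role.word)
			break;
	}

If that's the comparison in play, a cr4_pae mismatch would explain why
direct roots never hit the cache, but I'd like to see the details spelled
out in the changelog.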