Message-ID: <CAOLK0pyMbKuxA0KnsRz_92jr5j15YXb2yUP0GH-xE06BqO_zTA@mail.gmail.com>
Date: Mon, 7 Jan 2019 13:13:59 +0800
From: Tianyu Lan <lantianyu1986@...il.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Lan Tianyu <Tianyu.Lan@...rosoft.com>, christoffer.dall@....com,
marc.zyngier@....com, linux@...linux.org.uk,
catalin.marinas@....com, will.deacon@....com, jhogan@...nel.org,
ralf@...ux-mips.org, paul.burton@...s.com, paulus@...abs.org,
benh@...nel.crashing.org, mpe@...erman.id.au,
Paolo Bonzini <pbonzini@...hat.com>,
Radim Krcmar <rkrcmar@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, bp@...en8.de,
"H. Peter Anvin" <hpa@...or.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
"linux-kernel@...r kernel org" <linux-kernel@...r.kernel.org>,
linux-mips@...r.kernel.org, kvm-ppc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, kvm <kvm@...r.kernel.org>,
michael.h.kelley@...rosoft.com, kys@...rosoft.com,
vkuznets@...hat.com
Subject: Re: [PATCH 6/11] KVM/MMU: Flush tlb with range list in sync_page()
On Sat, Jan 5, 2019 at 12:30 AM Sean Christopherson
<sean.j.christopherson@...el.com> wrote:
>
> On Fri, Jan 04, 2019 at 04:54:00PM +0800, lantianyu1986@...il.com wrote:
> > From: Lan Tianyu <Tianyu.Lan@...rosoft.com>
> >
> > This patch is to flush tlb via flush list function.
>
> More explanation of why this is beneficial would be nice. Without the
> context of the overall series it's not immediately obvious what
> kvm_flush_remote_tlbs_with_list() does without a bit of digging.
>
> >
> > Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
> > ---
> > arch/x86/kvm/paging_tmpl.h | 16 ++++++++++++++--
> > 1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> > index 833e8855bbc9..866ccdea762e 100644
> > --- a/arch/x86/kvm/paging_tmpl.h
> > +++ b/arch/x86/kvm/paging_tmpl.h
> > @@ -973,6 +973,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> > bool host_writable;
> > gpa_t first_pte_gpa;
> > int set_spte_ret = 0;
> > + LIST_HEAD(flush_list);
> >
> > /* direct kvm_mmu_page can not be unsync. */
> > BUG_ON(sp->role.direct);
> > @@ -980,6 +981,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> > first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
> >
> > for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
> > + int tmp_spte_ret = 0;
> > unsigned pte_access;
> > pt_element_t gpte;
> > gpa_t pte_gpa;
> > @@ -1029,14 +1031,24 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> >
> > host_writable = sp->spt[i] & SPTE_HOST_WRITEABLE;
> >
> > - set_spte_ret |= set_spte(vcpu, &sp->spt[i],
> > + tmp_spte_ret = set_spte(vcpu, &sp->spt[i],
> > pte_access, PT_PAGE_TABLE_LEVEL,
> > gfn, spte_to_pfn(sp->spt[i]),
> > true, false, host_writable);
> > +
> > + if (kvm_available_flush_tlb_with_range()
> > + && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
> > + struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
> > + & PT64_BASE_ADDR_MASK);
> > + list_add(&leaf_sp->flush_link, &flush_list);
> > + }
> > +
> > + set_spte_ret |= tmp_spte_ret;
> > +
> > }
> >
> > if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
> > - kvm_flush_remote_tlbs(vcpu->kvm);
> > + kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
>
> This is a bit confusing and potentially fragile. It's not obvious that
> kvm_flush_remote_tlbs_with_list() is guaranteed to call
> kvm_flush_remote_tlbs() when kvm_available_flush_tlb_with_range() is
> false, and you're relying on the kvm_flush_remote_tlbs_with_list() call
> chain to never optimize away the empty list case. Rechecking
> kvm_available_flush_tlb_with_range() isn't expensive.
That makes sense. Will update. Thanks.
>
> >
> > return nr_present;
> > }
> > --
> > 2.14.4
> >
--
Best regards
Tianyu Lan