Message-ID: <Y5of3zZtfogg1Rml@google.com>
Date: Wed, 14 Dec 2022 19:11:27 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: Hou Wenlong <houwenlong.hwl@...group.com>, kvm@...r.kernel.org,
David Matlack <dmatlack@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Lan Tianyu <Tianyu.Lan@...rosoft.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/6] KVM: x86/mmu: Fix wrong gfn range of tlb flushing
in kvm_set_pte_rmapp()
On Wed, Dec 14, 2022, Lai Jiangshan wrote:
> On Thu, Oct 13, 2022 at 1:00 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > > +/* Flush the given page (huge or not) of guest memory. */
> > > +static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
> > > +{
> > > +	u64 pages = KVM_PAGES_PER_HPAGE(level);
> > > +
> >
> > Rather than require the caller to align gfn, what about doing gfn_round_for_level()
> > in this helper? It's a little odd that the caller needs to align gfn but doesn't
> > have to compute the size.
> >
> > I'm 99% certain kvm_set_pte_rmap() is the only path that doesn't already align the
> > gfn, but it's nice to not have to worry about getting this right, e.g. alternatively
> > this helper could WARN if the gfn is misaligned, but that's _more_ work.
> >
> > 	kvm_flush_remote_tlbs_with_address(kvm, gfn_round_for_level(gfn, level),
> > 					   KVM_PAGES_PER_HPAGE(level));
> >
> > If no one objects, this can be done when the series is applied, i.e. no need to
> > send v5 just for this.
> >
>
> Hello Paolo, Sean, Hou,
>
> It seems the patchset has not been queued yet. I believe it does
> fix real bugs.
It's on my list of things to get merged for 6.3. I haven't been more aggressive
about getting it queued because I assume there are very few KVM-on-Hyper-V users
likely to be affected.
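
For reference, a rough sketch of the helper with the rounding folded in, as
suggested above.  This is just the quoted snippet combined with the proposed
one-liner, and assumes the gfn_round_for_level() and
kvm_flush_remote_tlbs_with_address() signatures implied by the quoted code,
i.e. it's illustrative, not the final applied form:

/* Flush the given page (huge or not) of guest memory. */
static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
{
	/*
	 * Round the gfn down to the base of its (possibly huge) page so
	 * that callers don't need to pre-align it; the number of pages
	 * to flush is derived from the level.
	 */
	kvm_flush_remote_tlbs_with_address(kvm, gfn_round_for_level(gfn, level),
					   KVM_PAGES_PER_HPAGE(level));
}

The WARN alternative mentioned above would instead keep alignment as the
caller's responsibility, e.g. by adding something like
WARN_ON_ONCE(gfn & (KVM_PAGES_PER_HPAGE(level) - 1)) at the top of the helper.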