Message-Id: <3013350.qntrAZtlsQ@townsend>
Date: Tue, 05 Feb 2019 14:52:13 +1100
From: Alistair Popple <alistair@...ple.id.au>
To: Andrea Arcangeli <aarcange@...hat.com>
Cc: Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Michael Ellerman <mpe@...erman.id.au>,
Alexey Kardashevskiy <aik@...abs.ru>,
Mark Hairgrove <mhairgrove@...dia.com>,
Balbir Singh <bsingharora@...il.com>,
David Gibson <david@...son.dropbear.id.au>,
Jerome Glisse <jglisse@...hat.com>,
Jason Wang <jasowang@...hat.com>, linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH] powerpc/powernv/npu: Remove redundant change_pte() hook
On Thursday, 31 January 2019 12:11:06 PM AEDT Andrea Arcangeli wrote:
> On Thu, Jan 31, 2019 at 06:30:22PM +0800, Peter Xu wrote:
> > The change_pte() notifier was designed as a quick path to update
> > secondary MMU PTEs on write-permission or PFN changes.  For KVM, it
> > can reduce VM exits when a vcpu faults on pages that were touched up
> > by KSM.  It is not meant to do cache invalidations; after all, the
> > notifier is called before the real PTE update (see set_pte_at_notify(),
> > where set_pte_at() is called afterwards).
Thanks for the fixup. I didn't realise that invalidate_range() always gets
called, but I now see that is the case, so this change looks good to me as well.
Reviewed-by: Alistair Popple <alistair@...ple.id.au>
> > All the necessary cache invalidation should be done in
> > invalidate_range() already.
> >
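(For reference, the ordering described above is visible in set_pte_at_notify().
A simplified sketch of the macro from include/linux/mmu_notifier.h follows;
the local-variable plumbing and version-specific details are omitted:)

#define set_pte_at_notify(__mm, __address, __ptep, __pte)		\
({									\
	/* notify secondary MMUs of the new PFN/permissions first... */\
	mmu_notifier_change_pte(__mm, __address, __pte);		\
	/* ...then install the new PTE in the primary MMU */		\
	set_pte_at(__mm, __address, __ptep, __pte);			\
})
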
> > CC: Benjamin Herrenschmidt <benh@...nel.crashing.org>
> > CC: Paul Mackerras <paulus@...ba.org>
> > CC: Michael Ellerman <mpe@...erman.id.au>
> > CC: Alistair Popple <alistair@...ple.id.au>
> > CC: Alexey Kardashevskiy <aik@...abs.ru>
> > CC: Mark Hairgrove <mhairgrove@...dia.com>
> > CC: Balbir Singh <bsingharora@...il.com>
> > CC: David Gibson <david@...son.dropbear.id.au>
> > CC: Andrea Arcangeli <aarcange@...hat.com>
> > CC: Jerome Glisse <jglisse@...hat.com>
> > CC: Jason Wang <jasowang@...hat.com>
> > CC: linuxppc-dev@...ts.ozlabs.org
> > CC: linux-kernel@...r.kernel.org
> > Signed-off-by: Peter Xu <peterx@...hat.com>
> > ---
> >
> > arch/powerpc/platforms/powernv/npu-dma.c | 10 ----------
> > 1 file changed, 10 deletions(-)
>
> Reviewed-by: Andrea Arcangeli <aarcange@...hat.com>
>
> It doesn't make sense to implement change_pte as an invalidate.
> change_pte is not compulsory to implement, so if one only wants
> invalidates, the change_pte method shouldn't be implemented in the
> first place and the common code will guarantee that the range
> invalidates are invoked instead.
>
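(Side note on why leaving change_pte unimplemented is safe: the notifier core
only calls the hook when it is present.  Roughly, a sketch of
__mmu_notifier_change_pte() from mm/mmu_notifier.c, locking details abridged:)

void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
			       pte_t pte)
{
	struct mmu_notifier *mn;
	int id;

	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
		/*
		 * Notifiers without a change_pte method are simply skipped;
		 * they still see the surrounding range invalidates.
		 */
		if (mn->ops->change_pte)
			mn->ops->change_pte(mn, mm, address, pte);
	}
	srcu_read_unlock(&srcu, id);
}
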
> Currently the whole change_pte optimization is effectively disabled, as
> noted in past discussions with Jerome (because of the range invalidates
> that always surround it), so we need to revisit the change_pte logic and
> decide whether to re-enable it or to drop it altogether. In the meantime
> it's good to clean up spots like the one below that should leave
> change_pte alone.
>
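(The "range invalidates that always surround it" refer to callers like KSM's
replace_page().  An abridged sketch of that calling pattern is below;
replace_page_like_caller is a made-up name for illustration, and the old
start/end signatures are used for brevity where newer kernels pass a
struct mmu_notifier_range instead:)

/* Abridged shape of a change_pte caller such as KSM's replace_page(). */
static void replace_page_like_caller(struct mm_struct *mm, unsigned long addr,
				     pte_t *ptep, pte_t newpte)
{
	unsigned long start = addr & PAGE_MASK;
	unsigned long end = start + PAGE_SIZE;

	mmu_notifier_invalidate_range_start(mm, start, end);
	/*
	 * ->change_pte() fires here, inside the invalidation section,
	 * which is what currently defeats the intended fast path.
	 */
	set_pte_at_notify(mm, addr, ptep, newpte);
	mmu_notifier_invalidate_range_end(mm, start, end);
}
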
> There are several examples of mmu_notifier_ops in the kernel that don't
> implement change_pte; in fact they are the majority. Of all mmu notifier
> users, only nv_nmmu_notifier_ops, intel_mmuops_change and
> kvm_mmu_notifier_ops implement change_pte, and as Peter found out by
> source review, nv_nmmu_notifier_ops and intel_mmuops_change get it wrong
> and should stop implementing it as an invalidate.
>
> In short, change_pte is only implemented correctly by KVM, which really
> updates the spte and flushes the TLB; the spte update remains and could
> avoid a vmexit if we figure out how to re-enable the optimization safely
> (the TLB fill after change_pte in the KVM EPT/shadow secondary MMU is
> looked up by the CPU in hardware).
>
> If change_pte is implemented, it should update the mapping like KVM
> does and not do an invalidate.
>
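(For contrast, KVM's hook re-establishes the mapping rather than tearing it
down.  Roughly, a sketch of kvm_mmu_notifier_change_pte() from
virt/kvm/kvm_main.c of that era; exact details vary by kernel version:)

static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long address, pte_t pte)
{
	struct kvm *kvm = mmu_notifier_to_kvm(mn);
	int idx;

	idx = srcu_read_lock(&kvm->srcu);
	spin_lock(&kvm->mmu_lock);
	kvm->mmu_notifier_seq++;
	/* Point the spte at the new pfn instead of just zapping it. */
	kvm_set_spte_hva(kvm, address, pte);
	spin_unlock(&kvm->mmu_lock);
	srcu_read_unlock(&kvm->srcu, idx);
}
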
> Thanks,
> Andrea
>
> > diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> > index 3f58c7dbd581..c003b29d870e 100644
> > --- a/arch/powerpc/platforms/powernv/npu-dma.c
> > +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> > @@ -917,15 +917,6 @@ static void pnv_npu2_mn_release(struct mmu_notifier *mn,
> >  	mmio_invalidate(npu_context, 0, ~0UL);
> >  }
> >  
> > -static void pnv_npu2_mn_change_pte(struct mmu_notifier *mn,
> > -				   struct mm_struct *mm,
> > -				   unsigned long address,
> > -				   pte_t pte)
> > -{
> > -	struct npu_context *npu_context = mn_to_npu_context(mn);
> > -	mmio_invalidate(npu_context, address, PAGE_SIZE);
> > -}
> > -
> >  static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn,
> >  					 struct mm_struct *mm,
> >  					 unsigned long start, unsigned long end)
> > @@ -936,7 +927,6 @@ static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn,
> >  
> >  static const struct mmu_notifier_ops nv_nmmu_notifier_ops = {
> >  	.release = pnv_npu2_mn_release,
> > -	.change_pte = pnv_npu2_mn_change_pte,
> >  	.invalidate_range = pnv_npu2_mn_invalidate_range,
> >  };
> >  