Message-ID: <ZL7AbLJ+RUUgzt8O@bombadil.infradead.org>
Date: Mon, 24 Jul 2023 11:18:20 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: Alistair Popple <apopple@...dia.com>,
linux-fsdevel@...r.kernel.org, linux-xfs@...r.kernel.org,
Pankaj Raghav <p.raghav@...sung.com>
Cc: SeongJae Park <sj@...nel.org>, kevin.tian@...el.com,
x86@...nel.org, ajd@...ux.ibm.com, kvm@...r.kernel.org,
linux-mm@...ck.org, catalin.marinas@....com, seanjc@...gle.com,
will@...nel.org, linux-kernel@...r.kernel.org, npiggin@...il.com,
zhi.wang.linux@...il.com, jgg@...pe.ca, iommu@...ts.linux.dev,
nicolinc@...dia.com, jhubbard@...dia.com, fbarrat@...ux.ibm.com,
akpm@...ux-foundation.org, linuxppc-dev@...ts.ozlabs.org,
linux-arm-kernel@...ts.infradead.org, robin.murphy@....com,
mcgrof@...nel.org
Subject: Re: [PATCH v2 3/5] mmu_notifiers: Call invalidate_range() when
invalidating TLBs
Cc'ing fsdevel + xfs folks as this fixes a regression seen with XFS
when running the fstests test generic/176.
On Thu, Jul 20, 2023 at 10:52:59AM +1000, Alistair Popple wrote:
>
> SeongJae Park <sj@...nel.org> writes:
>
> > Hi Alistair,
> >
> > On Wed, 19 Jul 2023 22:18:44 +1000 Alistair Popple <apopple@...dia.com> wrote:
> >
> >> The invalidate_range() is going to become an architecture specific mmu
> >> notifier used to keep the TLB of secondary MMUs such as an IOMMU in
> >> sync with the CPU page tables. Currently it is called from separate
> >> code paths to the main CPU TLB invalidations. This can lead to a
> >> secondary TLB not getting invalidated when required and makes it hard
> >> to reason about when exactly the secondary TLB is invalidated.
> >>
> >> To fix this move the notifier call to the architecture specific TLB
> >> maintenance functions for architectures that have secondary MMUs
> >> requiring explicit software invalidations.
> >>
> >> This fixes an SMMU bug on ARM64. On ARM64, PTE permission upgrades
> >> require a TLB invalidation. This invalidation is done by the
> >> architecture specific ptep_set_access_flags() which calls
> >> flush_tlb_page() if required. However, this doesn't call the notifier,
> >> resulting in infinite faults being generated by devices using the SMMU
> >> if it has previously cached a read-only PTE in its TLB.
> >>
> >> Moving the invalidations into the TLB invalidation functions ensures
> >> all invalidations happen at the same time as the CPU invalidation. The
> >> architecture specific flush_tlb_all() routines do not call the
> >> notifier as none of the IOMMUs require this.
> >>
> >> Signed-off-by: Alistair Popple <apopple@...dia.com>
> >> Suggested-by: Jason Gunthorpe <jgg@...pe.ca>
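(For readers following along from fsdevel: a minimal sketch of the idea
described above, assuming an arm64 flush_tlb_page()-style helper. The
helper name and layout are illustrative, not the actual hunk from this
series; only mmu_notifier_arch_invalidate_secondary_tlbs() and its
(mm, start, end) arguments are taken from the patch.)

#include <linux/mm_types.h>
#include <linux/mmu_notifier.h>

/* Illustrative only -- not the real arm64 code from this series. */
static inline void example_flush_tlb_page(struct vm_area_struct *vma,
					  unsigned long uaddr)
{
	unsigned long start = uaddr & PAGE_MASK;

	/* ... the architecture's CPU TLB invalidation for this page ... */

	/*
	 * The secondary (IOMMU/SMMU) TLB notifier is invoked from the same
	 * path as the CPU invalidation, so it can no longer be missed.
	 */
	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start,
						    start + PAGE_SIZE);
}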
> >
> > I found the below kernel NULL-dereference issue on the latest mm-unstable
> > tree, and bisect points me to the commit of this patch, namely
> > 75c400f82d347af1307010a3e06f3aa5d831d995.
> >
> > To reproduce, I use 'stress-ng --bigheap $(nproc)'. The issue happens as soon
> > as it starts reclaiming memory. I haven't dug into this yet, but I'm
> > reporting the issue first, since you might have an idea already.
>
> Thanks for the report SJ!
>
> I see the problem - current->mm can (obviously!) be NULL, which is what's
> leading to the NULL dereference. Instead I think on x86 I need to call
> the notifier when adding the invalidation to the tlbbatch in
> arch_tlbbatch_add_pending(), which is equivalent to what ARM64 does.
>
> The below should fix it. Will do a respin with this.
>
> ---
>
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 837e4a50281a..79c46da919b9 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -4,6 +4,7 @@
>
> #include <linux/mm_types.h>
> #include <linux/sched.h>
> +#include <linux/mmu_notifier.h>
>
> #include <asm/processor.h>
> #include <asm/cpufeature.h>
> @@ -282,6 +283,7 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
> {
> inc_mm_tlb_gen(mm);
> cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
> + mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
> }
>
> static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 0b990fb56b66..2d253919b3e8 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -1265,7 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>
> put_flush_tlb_info();
> put_cpu();
> - mmu_notifier_arch_invalidate_secondary_tlbs(current->mm, 0, -1UL);
> }
>
> /*
This patch also fixes a regression introduced in linux-next: the same
crash in arch_tlbbatch_flush() is reproducible with fstests generic/176
on XFS, and this patch resolves it [0]. It should also close out the
syzbot crash [1].
[0] https://gist.github.com/mcgrof/b37fc8cf7e6e1b3935242681de1a83e2
[1] https://lore.kernel.org/all/0000000000003afcb4060135a664@google.com/
Tested-by: Luis Chamberlain <mcgrof@...nel.org>
Luis