Date:   Thu, 20 Jul 2023 10:52:59 +1000
From:   Alistair Popple <apopple@...dia.com>
To:     SeongJae Park <sj@...nel.org>
Cc:     akpm@...ux-foundation.org, ajd@...ux.ibm.com,
        catalin.marinas@....com, fbarrat@...ux.ibm.com,
        iommu@...ts.linux.dev, jgg@...pe.ca, jhubbard@...dia.com,
        kevin.tian@...el.com, kvm@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
        mpe@...erman.id.au, nicolinc@...dia.com, npiggin@...il.com,
        robin.murphy@....com, seanjc@...gle.com, will@...nel.org,
        x86@...nel.org, zhi.wang.linux@...il.com
Subject: Re: [PATCH v2 3/5] mmu_notifiers: Call invalidate_range() when
 invalidating TLBs


SeongJae Park <sj@...nel.org> writes:

> Hi Alistair,
>
> On Wed, 19 Jul 2023 22:18:44 +1000 Alistair Popple <apopple@...dia.com> wrote:
>
>> The invalidate_range() callback is going to become an architecture
>> specific mmu notifier used to keep the TLB of secondary MMUs such as
>> an IOMMU in sync with the CPU page tables. Currently it is called
>> from code paths separate from the main CPU TLB invalidations. This
>> can lead to a secondary TLB not being invalidated when required and
>> makes it hard to reason about when exactly the secondary TLB is
>> invalidated.
>> 
>> To fix this move the notifier call to the architecture specific TLB
>> maintenance functions for architectures that have secondary MMUs
>> requiring explicit software invalidations.
>> 
>> This fixes an SMMU bug on ARM64. On ARM64, PTE permission upgrades
>> require a TLB invalidation. This invalidation is done by the
>> architecture specific ptep_set_access_flags(), which calls
>> flush_tlb_page() if required. However, this doesn't call the notifier,
>> resulting in infinite faults being generated by devices using the SMMU
>> if it has previously cached a read-only PTE in its TLB.
>> 
>> Moving the invalidations into the TLB invalidation functions ensures
>> all invalidations happen at the same time as the CPU invalidation. The
>> architecture specific flush_tlb_all() routines do not call the
>> notifier as none of the IOMMUs require this.
>> 
>> Signed-off-by: Alistair Popple <apopple@...dia.com>
>> Suggested-by: Jason Gunthorpe <jgg@...pe.ca>
>
> I found the below kernel NULL-dereference issue on the latest mm-unstable
> tree, and bisect points me to the commit of this patch, namely
> 75c400f82d347af1307010a3e06f3aa5d831d995.
>
> To reproduce, I use 'stress-ng --bigheap $(nproc)'.  The issue happens as
> soon as it starts reclaiming memory.  I didn't dive deep into this yet,
> but I'm reporting it first since you might already have an idea.

Thanks for the report SJ!

I see the problem - current->mm can (obviously!) be NULL, which is what's
leading to the NULL dereference. Instead, I think on x86 I need to call
the notifier when adding the invalidation to the tlbbatch in
arch_tlbbatch_add_pending(), which is equivalent to what ARM64 does.
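
For reference, the arm64 side of this series takes roughly the following
shape (a sketch from memory, so treat the details as illustrative rather
than the exact code in the series): the notifier is called from the helper
that issues the per-page TLBI, which the arm64 batched path also goes
through, and it keys off the mm that is passed in rather than current->mm:

static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
					   unsigned long uaddr)
{
	unsigned long addr;

	dsb(ishst);
	addr = __TLBI_VADDR(uaddr, ASID(mm));
	__tlbi(vale1is, addr);
	__tlbi_user(vale1is, addr);
	/* Tell secondary MMUs (e.g. the SMMU) to drop their cached entry. */
	mmu_notifier_arch_invalidate_secondary_tlbs(mm, uaddr & PAGE_MASK,
						    (uaddr & PAGE_MASK) + PAGE_SIZE);
}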

The diff below should fix it. Will do a respin with this.

---

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 837e4a50281a..79c46da919b9 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -4,6 +4,7 @@
 
 #include <linux/mm_types.h>
 #include <linux/sched.h>
+#include <linux/mmu_notifier.h>
 
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
@@ -282,6 +283,7 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 {
 	inc_mm_tlb_gen(mm);
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
+	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }
 
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0b990fb56b66..2d253919b3e8 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,7 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 
 	put_flush_tlb_info();
 	put_cpu();
-	mmu_notifier_arch_invalidate_secondary_tlbs(current->mm, 0, -1UL);
 }
 
 /*
