Message-ID: <4e298f68-36ff-496a-81d2-7124f792180d@bytedance.com>
Date: Tue, 11 Feb 2025 17:43:07 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: David Hildenbrand <david@...hat.com>
Cc: "Russell King (Oracle)" <linux@...linux.org.uk>,
 Ezra Buehler <ezra@...yb.ch>, linux-mm@...ck.org,
 Andrew Morton <akpm@...ux-foundation.org>,
 "Mike Rapoport (Microsoft)" <rppt@...nel.org>,
 Muchun Song <muchun.song@...ux.dev>, Vlastimil Babka <vbabka@...e.cz>,
 Ryan Roberts <ryan.roberts@....com>,
 "Vishal Moola (Oracle)" <vishal.moola@...il.com>,
 Hugh Dickins <hughd@...gle.com>, Matthew Wilcox <willy@...radead.org>,
 Peter Xu <peterx@...hat.com>, Nicolas Ferre <nicolas.ferre@...rochip.com>,
 Alexandre Belloni <alexandre.belloni@...tlin.com>,
 Claudiu Beznea <claudiu.beznea@...on.dev>,
 open list <linux-kernel@...r.kernel.org>,
 linux-arm-kernel@...ts.infradead.org
Subject: Re: [REGRESSION] NULL pointer dereference on ARM (AT91SAM9G25) during
 compaction



On 2025/2/11 17:37, David Hildenbrand wrote:
> On 11.02.25 10:29, Qi Zheng wrote:
>>
>>
>> On 2025/2/11 17:14, David Hildenbrand wrote:
>>> On 11.02.25 04:45, Qi Zheng wrote:
>>>> Hi Russell,
>>>>
>>>> On 2025/2/11 01:03, Russell King (Oracle) wrote:
>>>>> On Mon, Feb 10, 2025 at 05:49:38PM +0100, Ezra Buehler wrote:
>>>>>> When running vanilla Linux 6.13 or newer (6.14-rc2) on the
>>>>>> AT91SAM9G25-based GARDENA smart Gateway, we are seeing a NULL pointer
>>>>>> dereference resulting in a kernel panic. The culprit seems to be commit
>>>>>> fc9c45b71f43 ("arm: adjust_pte() use pte_offset_map_rw_nolock()").
>>>>>> Reverting the commit apparently fixes the issue.
>>>>>
>>>>> The blamed commit is buggy:
>>>>>
>>>>> arch/arm/include/asm/tlbflush.h:
>>>>> #define update_mmu_cache(vma, addr, ptep) \
>>>>>            update_mmu_cache_range(NULL, vma, addr, ptep, 1)
>>>>>
>>>>> So vmf can be NULL. This didn't matter before this commit, because
>>>>> vmf was not used by ARM's update_mmu_cache_range(). However, the
>>>>> commit introduced a dereference of it, which now causes a NULL
>>>>> pointer dereference.
>>>>>
>>>>> Not sure what the correct solution is, but at a guess, both
>>>>> instances of:
>>>>>
>>>>>      if (ptl != vmf->ptl)
>>>>>
>>>>> need to become:
>>>>>
>>>>>      if (!vmf || ptl != vmf->ptl)
>>>>
>>>> No, we can't do that: without split PTE locks, ptl is the shared
>>>> mm->page_table_lock, which the caller already holds, so taking it
>>>> again would deadlock.
>>>
>>> Maybe we can simply special-case on CONFIG_SPLIT_PTE_PTLOCKS ?
>>>
>>> if (IS_ENABLED(CONFIG_SPLIT_PTE_PTLOCKS)) {
>>
>> In this case, if two vmas map the same PTE page, then the same PTE lock
>> will be taken twice. Right?
> 
> Hmm, the comment says:
> 
>          /*
>           * This is called while another page table is mapped, so we
>           * must use the nested version.  This also means we need to
>           * open-code the spin-locking.
>           */
> 
> "another page table" implies that it cannot be the same. But maybe that 
> comment was also wrong?

I don't see make_coherent() ensuring this when traversing the vmas, so
I propose the following changes:

diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index 2bec87c3327d2..dddbca9a2597e 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -61,8 +61,41 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
         return ret;
  }

+#if defined(CONFIG_SPLIT_PTE_PTLOCKS)
+/*
+ * If we are using split PTE locks, then we need to take the pte
+ * lock here.  Otherwise we are using shared mm->page_table_lock
+ * which is already locked, thus cannot take it.
+ */
+static inline bool do_pte_lock(spinlock_t *ptl, pmd_t pmdval, pmd_t *pmd)
+{
+       /*
+        * Use nested version here to indicate that we are already
+        * holding one similar spinlock.
+        */
+       spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+       if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
+               spin_unlock(ptl);
+               return false;
+       }
+
+       return true;
+}
+
+static inline void do_pte_unlock(spinlock_t *ptl)
+{
+       spin_unlock(ptl);
+}
+#else /* !defined(CONFIG_SPLIT_PTE_PTLOCKS) */
+static inline bool do_pte_lock(spinlock_t *ptl, pmd_t pmdval, pmd_t *pmd)
+{
+       return true;
+}
+static inline void do_pte_unlock(spinlock_t *ptl) {}
+#endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */
+
  static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
-                     unsigned long pfn, struct vm_fault *vmf)
+                     unsigned long pfn)
  {
         spinlock_t *ptl;
         pgd_t *pgd;
@@ -99,23 +132,14 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
         if (!pte)
                 return 0;

-       /*
-        * If we are using split PTE locks, then we need to take the page
-        * table lock here.  Otherwise we are using shared mm->page_table_lock
-        * which is already locked, thus cannot take it.
-        */
-       if (ptl != vmf->ptl) {
-               spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
-               if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
-                       pte_unmap_unlock(pte, ptl);
-                       goto again;
-               }
+       if (!do_pte_lock(ptl, pmdval, pmd)) {
+               pte_unmap(pte);
+               goto again;
         }

         ret = do_adjust_pte(vma, address, pfn, pte);

-       if (ptl != vmf->ptl)
-               spin_unlock(ptl);
+       do_pte_unlock(ptl);
         pte_unmap(pte);

         return ret;
@@ -123,16 +147,17 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,

  static void
  make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
-             unsigned long addr, pte_t *ptep, unsigned long pfn,
-             struct vm_fault *vmf)
+             unsigned long addr, pte_t *ptep, unsigned long pfn)
  {
         struct mm_struct *mm = vma->vm_mm;
         struct vm_area_struct *mpnt;
         unsigned long offset;
+       unsigned long start;
         pgoff_t pgoff;
         int aliases = 0;

         pgoff = vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT);
+       start = ALIGN_DOWN(addr, PMD_SIZE);

         /*
          * If we have any shared mappings that are in the same mm
@@ -141,6 +166,8 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
          */
         flush_dcache_mmap_lock(mapping);
         vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
+               unsigned long mpnt_addr;
+
                 /*
                  * If this VMA is not in our MM, we can ignore it.
                  * Note that we intentionally mask out the VMA
@@ -151,7 +178,14 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
                 if (!(mpnt->vm_flags & VM_MAYSHARE))
                         continue;
                 offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-               aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn, vmf);
+               mpnt_addr = mpnt->vm_start + offset;
+               /*
+                * If mpnt_addr and addr are mapped to the same PTE page,
+                * skip this vma to avoid taking the same PTE lock twice.
+                */
+               if (mpnt_addr >= start && mpnt_addr - start < PMD_SIZE)
+                       continue;
+               aliases += adjust_pte(mpnt, mpnt_addr, pfn);
         }
         flush_dcache_mmap_unlock(mapping);
         if (aliases)
@@ -194,7 +228,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
                 __flush_dcache_folio(mapping, folio);
         if (mapping) {
                 if (cache_is_vivt())
-                       make_coherent(mapping, vma, addr, ptep, pfn, vmf);
+                       make_coherent(mapping, vma, addr, ptep, pfn);
                 else if (vma->vm_flags & VM_EXEC)
                         __flush_icache_all();
         }

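To make the new check concrete: two shared aliases whose addresses fall
within the same PMD-aligned region are mapped by the same PTE page, so
adjust_pte() would end up taking the very PTE lock we already hold.
Below is a standalone userspace-style sketch of just the address check
(PMD_SIZE is hardcoded to 2 MiB purely for illustration; the real value
depends on the configuration):

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel's PMD_SIZE and ALIGN_DOWN(), sketch only. */
#define SKETCH_PMD_SIZE         (2UL * 1024 * 1024)
#define SKETCH_ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* Mirrors the mpnt_addr check added to make_coherent() above. */
static bool same_pte_page(unsigned long addr, unsigned long mpnt_addr)
{
        unsigned long start = SKETCH_ALIGN_DOWN(addr, SKETCH_PMD_SIZE);

        return mpnt_addr >= start && mpnt_addr - start < SKETCH_PMD_SIZE;
}

int main(void)
{
        /* Aliases 4 KiB apart: same PTE page, so the vma is skipped. */
        printf("%d\n", same_pte_page(0x20100000UL, 0x20101000UL)); /* 1 */
        /* Aliases in different PMD-sized regions: safe to lock. */
        printf("%d\n", same_pte_page(0x20100000UL, 0x20300000UL)); /* 0 */
        return 0;
}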
Make sense?
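
P.S. The reason do_pte_lock() must be a no-op without
CONFIG_SPLIT_PTE_PTLOCKS (and why taking the same split PTE lock twice
would be just as fatal): spinlocks are not recursive, and the caller
already holds the shared mm->page_table_lock. A strictly illustrative
userspace sketch with POSIX spinlocks (relocking a held pthread spinlock
from the same thread is undefined by POSIX; common implementations
simply spin forever):

#include <pthread.h>
#include <stdio.h>

int main(void)
{
        pthread_spinlock_t lock;

        pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
        pthread_spin_lock(&lock);
        printf("lock held; taking it again...\n");
        fflush(stdout);
        /* Analogous to re-taking an already-held mm->page_table_lock:
         * the thread spins on itself and never makes progress. */
        pthread_spin_lock(&lock);
        printf("never reached\n");
        return 0;
}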

