Message-Id: <634f0236b4b4effc528b06e108b7bda51cd0ea2c.1734526570.git.zhengqi.arch@bytedance.com>
Date: Wed, 18 Dec 2024 21:04:43 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: peterz@...radead.org,
tglx@...utronix.de,
david@...hat.com,
jannh@...gle.com,
hughd@...gle.com,
yuzhao@...gle.com,
willy@...radead.org,
muchun.song@...ux.dev,
vbabka@...nel.org,
lorenzo.stoakes@...cle.com,
akpm@...ux-foundation.org,
rientjes@...gle.com,
vishal.moola@...il.com
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Qi Zheng <zhengqi.arch@...edance.com>,
linux-arm-kernel@...ts.infradead.org
Subject: [PATCH v2 07/15] arm64: pgtable: move pagetable_dtor() to __tlb_remove_table()
Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page
table pages can be freed together (regardless of whether RCU is used).
This prevents the use-after-free problem where the ptlock is freed
immediately but the page table pages are freed later via RCU.
Page tables shouldn't have swap cache, so use pagetable_free() instead of
free_page_and_swap_cache() to free page table pages.
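For clarity, a rough sketch of the ordering change (illustrative only, not a
verbatim copy of the resulting code):

	/* Before: e.g. in __pte_free_tlb() */
	pagetable_dtor(ptdesc);		/* ptlock torn down immediately */
	tlb_remove_ptdesc(tlb, ptdesc);	/* page freed later, possibly via RCU */

	/* After: both steps happen together in __tlb_remove_table() */
	pagetable_dtor(ptdesc);
	pagetable_free(ptdesc);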
Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
Suggested-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: linux-arm-kernel@...ts.infradead.org
---
arch/arm64/include/asm/tlb.h | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 408d0f36a8a8f..93591a80b5bfb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -9,11 +9,13 @@
#define __ASM_TLB_H
#include <linux/pagemap.h>
-#include <linux/swap.h>
static inline void __tlb_remove_table(void *_table)
{
- free_page_and_swap_cache((struct page *)_table);
+ struct ptdesc *ptdesc = (struct ptdesc *)_table;
+
+ pagetable_dtor(ptdesc);
+ pagetable_free(ptdesc);
}
#define tlb_flush tlb_flush
@@ -82,7 +84,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
{
struct ptdesc *ptdesc = page_ptdesc(pte);
- pagetable_dtor(ptdesc);
tlb_remove_ptdesc(tlb, ptdesc);
}
@@ -92,7 +93,6 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
{
struct ptdesc *ptdesc = virt_to_ptdesc(pmdp);
- pagetable_dtor(ptdesc);
tlb_remove_ptdesc(tlb, ptdesc);
}
#endif
@@ -106,7 +106,6 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
if (!pgtable_l4_enabled())
return;
- pagetable_dtor(ptdesc);
tlb_remove_ptdesc(tlb, ptdesc);
}
#endif
@@ -120,7 +119,6 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp,
if (!pgtable_l5_enabled())
return;
- pagetable_dtor(ptdesc);
tlb_remove_ptdesc(tlb, ptdesc);
}
#endif
--
2.20.1