Message-ID: <ee44764b-b9fe-431d-8b84-08fce6b5df75@linux.intel.com>
Date: Thu, 28 Aug 2025 13:31:20 +0800
From: Baolu Lu <baolu.lu@...ux.intel.com>
To: Dave Hansen <dave.hansen@...el.com>, "Tian, Kevin"
<kevin.tian@...el.com>, Jason Gunthorpe <jgg@...dia.com>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>, Jann Horn <jannh@...gle.com>,
Vasant Hegde <vasant.hegde@....com>, Alistair Popple <apopple@...dia.com>,
Peter Zijlstra <peterz@...radead.org>, Uladzislau Rezki <urezki@...il.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Andy Lutomirski <luto@...nel.org>, "Lai, Yi1" <yi1.lai@...el.com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"security@...nel.org" <security@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>, vishal.moola@...il.com,
Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH v3 1/1] iommu/sva: Invalidate KVA range on kernel TLB
flush
On 8/28/25 07:31, Dave Hansen wrote:
> On 8/27/25 03:58, Baolu Lu wrote:
>> Following the insights above, I wrote the code as follows. Does it look
>> good?
>
> I'd really prefer an actual diff that compiles. Because:
>
>> #ifdef CONFIG_ASYNC_PGTABLE_FREE
>> /* a 'struct ptdesc' that needs its destructor run */
>> #define ASYNC_PGTABLE_FREE_DTOR BIT(NR_PAGEFLAGS)
>
> This doesn't work, does it? Don't you need to _allocate_ a new bit?
> Wouldn't this die a hilariously horrible death if NR_PAGEFLAGS==64? ;)
>
> Also, I'm pretty sure you can't just go setting random bits in this
> because it aliases with page flags.
>
> ...
>> static void kernel_pgtable_work_func(struct work_struct *work)
>> {
>> struct ptdesc *ptdesc, *next;
>> LIST_HEAD(page_list);
>>
>> spin_lock(&kernel_pgtable_work.lock);
>> list_splice_tail_init(&kernel_pgtable_work.list, &page_list);
>> spin_unlock(&kernel_pgtable_work.lock);
>>
>> iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);
>>
>> list_for_each_entry_safe(ptdesc, next, &page_list, pt_list) {
>> list_del(&ptdesc->pt_list);
>> if (ptdesc->__page_flags & ASYNC_PGTABLE_FREE_DTOR)
>> pagetable_dtor_free(ptdesc);
>> else
>> __free_page(ptdesc_page(ptdesc));
>> }
>> }
> What I had in mind was that kernel_pgtable_work_func() only does:
>
> __free_pages(page, compound_order(page));
>
> All of the things that queue gunk to the list do the legwork of making
> the work_func() simple. This also guides them toward providing a single,
> compound page because _they_ have to do the work if they don't want
> something simple.
>
>> void kernel_pgtable_async_free_dtor(struct ptdesc *ptdesc)
>> {
>> spin_lock(&kernel_pgtable_work.lock);
>> ptdesc->__page_flags |= ASYNC_PGTABLE_FREE_DTOR;
>> list_add(&ptdesc->pt_list, &kernel_pgtable_work.list);
>> spin_unlock(&kernel_pgtable_work.lock);
>>
>> schedule_work(&kernel_pgtable_work.work);
>> }
>>
>> void kernel_pgtable_async_free_page_list(struct list_head *list)
>> {
>> spin_lock(&kernel_pgtable_work.lock);
>> list_splice_tail(list, &kernel_pgtable_work.list);
>> spin_unlock(&kernel_pgtable_work.lock);
>>
>> schedule_work(&kernel_pgtable_work.work);
>> }
>>
>> void kernel_pgtable_async_free_page(struct page *page)
>> {
>> spin_lock(&kernel_pgtable_work.lock);
>> list_add(&page_ptdesc(page)->pt_list, &kernel_pgtable_work.list);
>> spin_unlock(&kernel_pgtable_work.lock);
>>
>> schedule_work(&kernel_pgtable_work.work);
>> }
>
> I wouldn't have three of these, I'd have _one_. If you want to free a
> bunch of pages, then just call it a bunch of times. This is not
> performance critical.
>
> Oh, and the ptdesc flag shouldn't be for "I need to be asynchronously
> freed". All kernel PTE pages should ideally set it at allocation so you
> can do this:
>
> static inline void pagetable_free(struct ptdesc *pt)
> {
> struct page *page = ptdesc_page(pt);
>
> 	if (pt->some_field & PTDESC_KERNEL)
> kernel_pgtable_async_free_page(page);
> else
> __free_pages(page, compound_order(page));
> }
>
> The folks that, today, call pagetable_dtor_free() don't have to do
> _anything_ at free time. They just set the PTDESC_KERNEL bit at
> allocation time.
>
> See how that code reads? "If you have a kernel page table page, you must
> asynchronously free it." That's actually meaningful.
>
> I'm pretty sure the lower 24 bits of ptdesc->__page_type are free. So
> I'd just do this:
>
> #define PTDESC_TYPE_KERNEL BIT(0)
>
> Then, something needs to set:
>
> ptdesc->__page_type |= PTDESC_TYPE_KERNEL;
>
> That could happen as late as here:
>
> static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
> {
> struct ptdesc *ptdesc = virt_to_ptdesc(pte);
>
> // here
>
> pagetable_dtor_free(ptdesc);
> }
>
> Or as early as ptdesc allocation/constructor time. Then you check for
> PTDESC_TYPE_KERNEL in pagetable_free().
Thank you very much for your review comments. I have changed the patch
accordingly:
[PATCH 1/2] mm: Conditionally defer freeing of kernel page table pages
On x86 and other architectures that map the kernel's virtual address space
into the upper portion of every process's page table, the IOMMU's paging
structure caches can become stale when the CPU page table is shared with
the IOMMU in a Shared Virtual Address (SVA) context. This occurs when a
page used for the kernel's page tables is freed and reused without the
IOMMU being notified.
While the IOMMU driver is notified of changes to user virtual address
mappings, there is no similar notification mechanism for kernel page
table changes. This can lead to data corruption or system instability
when SVA is enabled, as the IOMMU's internal caches may retain stale
entries for kernel virtual addresses.
This patch introduces a conditional asynchronous mechanism, enabled by
CONFIG_ASYNC_PGTABLE_FREE. When enabled, it defers the freeing of pages
that are used as page tables for kernel address mappings: instead of
being freed immediately, these pages are queued to a work struct. The
deferred freeing provides a safe context for a future patch to add an
IOMMU-specific callback, which might be expensive on large-scale
systems. This ensures the necessary IOMMU cache invalidation is
performed before the page is finally returned to the page allocator,
outside of any critical, non-sleepable path.
In the current kernel, some page table pages are allocated with an
associated struct ptdesc, while others are not. Those without a ptdesc
are freed using free_pages() and its variants, which bypass the
destructor that pagetable_dtor_free() would run. While the long-term
plan is to convert all page table pages to use struct ptdesc, this patch
uses a temporary flag within ptdesc to indicate whether a page needs a
destructor, since the immediate goal is to fix a potential security
issue in IOMMU SVA. The flag and its associated logic can be removed
once the conversion is complete.
Suggested-by: Dave Hansen <dave.hansen@...ux.intel.com>
Signed-off-by: Lu Baolu <baolu.lu@...ux.intel.com>
---
arch/x86/mm/init_64.c | 9 ++++++-
arch/x86/mm/pat/set_memory.c | 5 +++-
arch/x86/mm/pgtable.c | 9 ++++++-
include/asm-generic/pgalloc.h | 15 +++++++++++-
include/linux/mm_types.h | 8 ++++++-
mm/Kconfig | 3 +++
mm/pgtable-generic.c | 45 +++++++++++++++++++++++++++++++++++
7 files changed, 89 insertions(+), 5 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 76e33bd7c556..8a050cfa0f8d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1013,7 +1013,14 @@ static void __meminit free_pagetable(struct page *page, int order)
free_reserved_pages(page, nr_pages);
#endif
} else {
- free_pages((unsigned long)page_address(page), order);
+ /*
+ * 'order == 0' means a kernel page table page; otherwise these are
+ * data pages pointed to by a huge page leaf PTE.
+ */
+ if (IS_ENABLED(CONFIG_ASYNC_PGTABLE_FREE) && !order)
+ kernel_pgtable_async_free(page_ptdesc(page));
+ else
+ free_pages((unsigned long)page_address(page), order);
}
}
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8834c76f91c9..9a552e93d1f1 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -438,7 +438,10 @@ static void cpa_collapse_large_pages(struct cpa_data *cpa)
list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) {
list_del(&ptdesc->pt_list);
- __free_page(ptdesc_page(ptdesc));
+ if (IS_ENABLED(CONFIG_ASYNC_PGTABLE_FREE))
+ kernel_pgtable_async_free(ptdesc);
+ else
+ __free_page(ptdesc_page(ptdesc));
}
}
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ddf248c3ee7d..6f400cff2ad4 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -757,7 +757,14 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
free_page((unsigned long)pmd_sv);
- pmd_free(&init_mm, pmd);
+ if (IS_ENABLED(CONFIG_ASYNC_PGTABLE_FREE)) {
+ struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
+
+ ptdesc->__page_type |= PTDESC_TYPE_KERNEL;
+ kernel_pgtable_async_free(ptdesc);
+ } else {
+ pmd_free(&init_mm, pmd);
+ }
return 1;
}
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 3c8ec3bfea44..d92dc5289ef4 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -46,6 +46,12 @@ static inline pte_t *pte_alloc_one_kernel_noprof(struct mm_struct *mm)
#define pte_alloc_one_kernel(...) alloc_hooks(pte_alloc_one_kernel_noprof(__VA_ARGS__))
#endif
+#ifdef CONFIG_ASYNC_PGTABLE_FREE
+void kernel_pgtable_async_free(struct ptdesc *ptdesc);
+#else
+static inline void kernel_pgtable_async_free(struct ptdesc *ptdesc) {}
+#endif
+
/**
* pte_free_kernel - free PTE-level kernel page table memory
* @mm: the mm_struct of the current context
@@ -53,7 +59,14 @@ static inline pte_t *pte_alloc_one_kernel_noprof(struct mm_struct *mm)
*/
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
- pagetable_dtor_free(virt_to_ptdesc(pte));
+ struct ptdesc *ptdesc = virt_to_ptdesc(pte);
+
+ ptdesc->__page_type |= PTDESC_TYPE_KERNEL;
+
+ if (IS_ENABLED(CONFIG_ASYNC_PGTABLE_FREE))
+ kernel_pgtable_async_free(ptdesc);
+ else
+ pagetable_dtor_free(ptdesc);
}
/**
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 08bc2442db93..f74c6b167e6a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -539,7 +539,7 @@ FOLIO_MATCH(compound_head, _head_3);
* @pt_share_count: Used for HugeTLB PMD page table share count.
* @_pt_pad_2: Padding to ensure proper alignment.
* @ptl: Lock for the page table.
- * @__page_type: Same as page->page_type. Unused for page tables.
+ * @__page_type: Same as page->page_type.
* @__page_refcount: Same as page refcount.
* @pt_memcg_data: Memcg data. Tracked for page tables here.
*
@@ -637,6 +637,12 @@ static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
}
#endif
+/*
+ * Used by ptdesc::__page_type to mark a page table page that maps
+ * kernel addresses.
+ */
+#define PTDESC_TYPE_KERNEL BIT(0)
+
/*
* Used for sizing the vmemmap region on some architectures
*/
diff --git a/mm/Kconfig b/mm/Kconfig
index e443fe8cd6cf..369f3259e6da 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1346,6 +1346,9 @@ config LOCK_MM_AND_FIND_VMA
config IOMMU_MM_DATA
bool
+config ASYNC_PGTABLE_FREE
+ bool
+
config EXECMEM
bool
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 567e2d084071..024d42480c51 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -406,3 +406,48 @@ pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
pte_unmap_unlock(pte, ptl);
goto again;
}
+
+#ifdef CONFIG_ASYNC_PGTABLE_FREE
+static void kernel_pgtable_work_func(struct work_struct *work);
+
+static struct {
+ struct list_head list;
+ /* protects the ptdesc list above */
+ spinlock_t lock;
+ struct work_struct work;
+} kernel_pgtable_work = {
+ .list = LIST_HEAD_INIT(kernel_pgtable_work.list),
+ .lock = __SPIN_LOCK_UNLOCKED(kernel_pgtable_work.lock),
+ .work = __WORK_INITIALIZER(kernel_pgtable_work.work, kernel_pgtable_work_func),
+};
+
+static void kernel_pgtable_work_func(struct work_struct *work)
+{
+ struct ptdesc *ptdesc, *next;
+ LIST_HEAD(page_list);
+
+ spin_lock(&kernel_pgtable_work.lock);
+ list_splice_tail_init(&kernel_pgtable_work.list, &page_list);
+ spin_unlock(&kernel_pgtable_work.lock);
+
+ list_for_each_entry_safe(ptdesc, next, &page_list, pt_list) {
+ list_del(&ptdesc->pt_list);
+ if (ptdesc->__page_type & PTDESC_TYPE_KERNEL) {
+ pagetable_dtor_free(ptdesc);
+ } else {
+ struct page *page = ptdesc_page(ptdesc);
+
+ __free_pages(page, compound_order(page));
+ }
+ }
+}
+
+void kernel_pgtable_async_free(struct ptdesc *ptdesc)
+{
+ spin_lock(&kernel_pgtable_work.lock);
+ list_add(&ptdesc->pt_list, &kernel_pgtable_work.list);
+ spin_unlock(&kernel_pgtable_work.lock);
+
+ schedule_work(&kernel_pgtable_work.work);
+}
+#endif
--
2.43.0
Thanks,
baolu