Message-ID: <aEv-JjY4lUPNqyjZ@pc636>
Date: Fri, 13 Jun 2025 12:32:06 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Alexander Gordeev <agordeev@...ux.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, sparclinux@...r.kernel.org,
xen-devel@...ts.xenproject.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, Hugh Dickins <hughd@...gle.com>,
Nicholas Piggin <npiggin@...il.com>,
Guenter Roeck <linux@...ck-us.net>, Juergen Gross <jgross@...e.com>,
Jeremy Fitzhardinge <jeremy@...p.org>,
Ryan Roberts <ryan.roberts@....com>
Subject: Re: [PATCH 2/6] mm: Lock kernel page tables before entering lazy MMU
mode
On Thu, Jun 12, 2025 at 07:36:09PM +0200, Alexander Gordeev wrote:
> As a follow-up to commit 691ee97e1a9d ("mm: fix lazy mmu docs and
> usage") take a step forward and protect with a lock not only user,
> but also kernel mappings before entering the lazy MMU mode. With
> that the semantics of arch_enter|leave_lazy_mmu_mode() callbacks
> is consolidated, which allows further simplifications.
>
> The effect of this consolidation is that kernels which are not fully
> preemptible (Real-Time) can not enter a context switch while the lazy
> MMU mode is active - which is easier to comprehend.
>
> Signed-off-by: Alexander Gordeev <agordeev@...ux.ibm.com>
> ---
> include/linux/pgtable.h | 12 ++++++------
> mm/kasan/shadow.c | 5 -----
> mm/memory.c | 5 ++++-
> mm/vmalloc.c | 6 ++++++
> 4 files changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 0b6e1f781d86..33bf2b13c219 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -224,12 +224,12 @@ static inline int pmd_dirty(pmd_t pmd)
> * a raw PTE pointer after it has been modified are not guaranteed to be
> * up to date.
> *
> - * In the general case, no lock is guaranteed to be held between entry and exit
> - * of the lazy mode. So the implementation must assume preemption may be enabled
> - * and cpu migration is possible; it must take steps to be robust against this.
> - * (In practice, for user PTE updates, the appropriate page table lock(s) are
> - * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
> - * and the mode cannot be used in interrupt context.
> + * For PREEMPT_RT kernels implementation must assume that preemption may
> + * be enabled and cpu migration is possible between entry and exit of the
> + * lazy MMU mode; it must take steps to be robust against this. There is
> + * no such assumption for non-PREEMPT_RT kernels, since both kernel and
> + * user page tables are protected with a spinlock while in lazy MMU mode.
> + * Nesting is not permitted and the mode cannot be used in interrupt context.
> */
> #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
> #define arch_enter_lazy_mmu_mode() do {} while (0)
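
If I read the new semantics right, a kernel-mapping update is expected to
follow a pattern along these lines. This is an illustrative sketch only,
not code from this series; kernel_pte_update_sketch() is a made-up helper:

	/*
	 * Illustrative only: the caller takes init_mm.page_table_lock
	 * before entering the lazy MMU mode, so on !PREEMPT_RT no
	 * context switch can happen while the mode is active.
	 */
	static void kernel_pte_update_sketch(pte_t *ptep, unsigned long addr,
					     unsigned long end, pgprot_t prot)
	{
		spin_lock(&init_mm.page_table_lock);
		arch_enter_lazy_mmu_mode();

		do {
			/* updates may be batched/deferred by the arch */
			set_pte_at(&init_mm, addr, ptep,
				   pte_modify(ptep_get(ptep), prot));
		} while (ptep++, addr += PAGE_SIZE, addr != end);

		arch_leave_lazy_mmu_mode();	/* deferred updates complete */
		spin_unlock(&init_mm.page_table_lock);
	}
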
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index d2c70cd2afb1..45115bd770a9 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -313,12 +313,10 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> __memset(page_to_virt(page), KASAN_VMALLOC_INVALID, PAGE_SIZE);
> pte = pfn_pte(page_to_pfn(page), PAGE_KERNEL);
>
> - spin_lock(&init_mm.page_table_lock);
> if (likely(pte_none(ptep_get(ptep)))) {
> set_pte_at(&init_mm, addr, ptep, pte);
> data->pages[index] = NULL;
> }
> - spin_unlock(&init_mm.page_table_lock);
>
> return 0;
> }
> @@ -465,13 +463,10 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
>
> page = (unsigned long)__va(pte_pfn(ptep_get(ptep)) << PAGE_SHIFT);
>
> - spin_lock(&init_mm.page_table_lock);
> -
> if (likely(!pte_none(ptep_get(ptep)))) {
> pte_clear(&init_mm, addr, ptep);
> free_page(page);
> }
> - spin_unlock(&init_mm.page_table_lock);
>
> return 0;
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index 71b3d3f98999..1ddc532b1f13 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3017,6 +3017,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
> pte = pte_offset_kernel(pmd, addr);
> if (!pte)
> return err;
> + spin_lock(&init_mm.page_table_lock);
> } else {
> if (create)
> pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
> @@ -3042,7 +3043,9 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
>
> arch_leave_lazy_mmu_mode();
>
> - if (mm != &init_mm)
> + if (mm == &init_mm)
> + spin_unlock(&init_mm.page_table_lock);
> + else
> pte_unmap_unlock(mapped_pte, ptl);
>
> *mask |= PGTBL_PTE_MODIFIED;
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ab986dd09b6a..57b11000ae36 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -105,6 +105,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> if (!pte)
> return -ENOMEM;
>
> + spin_lock(&init_mm.page_table_lock);
>
This is not good. We introduce another bottleneck: vmap_pte_range() now
takes the global init_mm.page_table_lock, so PTE population for all
concurrent vmalloc()/vmap() users serializes on one lock.
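
Roughly, once that lock is taken there, the PTE population path looks
like below (a simplified sketch, not the exact vmap_pte_range() from
the series):

	/*
	 * Simplified sketch: every PTE-level range mapped by
	 * vmalloc()/vmap() now funnels through the single global lock,
	 * even though the ranges being populated are disjoint.
	 */
	static int vmap_pte_range_sketch(pmd_t *pmd, unsigned long addr,
					 unsigned long end, pgprot_t prot,
					 struct page **pages, int *nr)
	{
		pte_t *pte = pte_alloc_kernel(pmd, addr);

		if (!pte)
			return -ENOMEM;

		spin_lock(&init_mm.page_table_lock);	/* global lock */
		arch_enter_lazy_mmu_mode();
		do {
			set_pte_at(&init_mm, addr, pte,
				   mk_pte(pages[(*nr)++], prot));
		} while (pte++, addr += PAGE_SIZE, addr != end);
		arch_leave_lazy_mmu_mode();
		spin_unlock(&init_mm.page_table_lock);

		return 0;
	}

Until now this path did not take any global lock while setting the
PTEs, so all CPUs doing vmalloc()/vmap() would contend on it.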
--
Uladzislau Rezki