Message-ID: <20260124184555.3936797-1-clm@meta.com>
Date: Sat, 24 Jan 2026 10:45:51 -0800
From: Chris Mason <clm@...a.com>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>
CC: Chris Mason <clm@...a.com>, Andrew Morton <akpm@...ux-foundation.org>,
	<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
	Suren Baghdasaryan <surenb@...gle.com>,
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
	Pedro Falcato <pfalcato@...e.de>,
	David Hildenbrand <david@...hat.com>,
	Vlastimil Babka <vbabka@...e.cz>, Michal Hocko <mhocko@...e.com>,
	Jann Horn <jannh@...gle.com>, <shikemeng@...weicloud.com>,
	<kasong@...cent.com>, <nphamcs@...il.com>, <bhe@...hat.com>,
	<baohua@...nel.org>, <chrisl@...nel.org>,
	Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH v3 11/11] mm: Use unmap_desc struct for freeing page tables.
On Wed, 21 Jan 2026 11:49:46 -0500 "Liam R. Howlett" <Liam.Howlett@...cle.com> wrote:
> Pass through the unmap_desc to free_pgtables() because it almost has
> everything necessary and is already on the stack.
>
> Updates testing code as necessary.
>
> No functional changes intended.
>
Hi everyone,
I'm running some more aggressive prompts through linux-next and this one was
flagged. Apologies in advance if it's way off:
> diff --git a/mm/memory.c b/mm/memory.c
> --- a/mm/memory.c
> +++ b/mm/memory.c
[ ... ]
> -void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> - struct vm_area_struct *vma, unsigned long pg_start,
> - unsigned long pg_end, unsigned long vma_end,
> - bool mm_wr_locked)
> +void free_pgtables(struct mmu_gather *tlb, struct unmap_desc *unmap)
> {
> struct unlink_vma_file_batch vb;
> + struct ma_state *mas = unmap->mas;
> + struct vm_area_struct *vma = unmap->first;
>
> /*
> * Note: USER_PGTABLES_CEILING may be passed as the value of pg_end and
> - * may be 0. Underflow is expected in this case. Otherwise the
> - * pagetable end is exclusive.
> - * vma_end is exclusive.
> - * The last vma address should never be larger than the pagetable end.
> + * may be 0. The underflow here is fine and expected.
> + * The vma_end is exclusive, which is fine until we use the mas_ instead
> + * of the vma iterators.
> + * For freeing the page tables to make sense, the vma_end must be larger
> + * than the pg_end, so check that after the potential underflow.
^^^^
The old comment said the last vma address "should never be larger" than the
pagetable end, but the new comment says vma_end "must be larger" than pg_end.
The WARN_ON_ONCE fires when vma_end > pg_end, i.e. the invariant being
enforced is vma_end <= pg_end, so "must be larger" appears backwards?
> + */
> + WARN_ON_ONCE(unmap->vma_end - 1 > unmap->pg_end - 1);
^^^^
On the exit_mmap() path, unmap_all_init() sets vma_end = ULONG_MAX while
pg_end = USER_PGTABLES_CEILING. On ARM, USER_PGTABLES_CEILING is defined
as TASK_SIZE (around 3GB), not zero.
Before this change, exit_mmap() explicitly passed both vma_end and pg_end
as USER_PGTABLES_CEILING, so the WARN_ON_ONCE would never trigger.
After this change, exit_mmap() uses the unmap struct from unmap_all_init()
which sets vma_end = ULONG_MAX but pg_end = USER_PGTABLES_CEILING.
On ARM this means:

	WARN_ON_ONCE(ULONG_MAX - 1 > TASK_SIZE - 1)

This would fire on every process exit on ARM systems. Is this intended?
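
To make the exit_mmap() difference concrete, a minimal userspace sketch with a
made-up 3GB stand-in for the ARM TASK_SIZE (clearly not the real kernel
definitions):

	#include <limits.h>
	#include <stdio.h>

	/* stand-in for USER_PGTABLES_CEILING == TASK_SIZE on ARM */
	#define FAKE_CEILING 0xC0000000UL

	int main(void)
	{
		/* old call: vma_end and pg_end were both USER_PGTABLES_CEILING */
		unsigned long pg_end = FAKE_CEILING, vma_end = FAKE_CEILING;
		printf("old: %d\n", vma_end - 1 > pg_end - 1);	/* prints 0, no warning */

		/* new path: unmap_all_init() sets vma_end to ULONG_MAX */
		vma_end = ULONG_MAX;
		printf("new: %d\n", vma_end - 1 > pg_end - 1);	/* prints 1, warning fires */
		return 0;
	}
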
> diff --git a/mm/mmap.c b/mm/mmap.c
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1307,10 +1307,10 @@ void exit_mmap(struct mm_struct *mm)
> */
> mm_flags_set(MMF_OOM_SKIP, mm);
> mmap_write_lock(mm);
> + unmap.mm_wr_locked = true;
> mt_clear_in_rcu(&mm->mm_mt);
> - vma_iter_set(&vmi, vma->vm_end);
> - free_pgtables(&tlb, &vmi.mas, vma, FIRST_USER_ADDRESS,
> - USER_PGTABLES_CEILING, USER_PGTABLES_CEILING, true);
> + vma_iter_set(&vmi, unmap.tree_reset);
> + free_pgtables(&tlb, &unmap);
The old call passed USER_PGTABLES_CEILING for both pg_end and vma_end.
The new code uses unmap_all_init() which sets pg_end = USER_PGTABLES_CEILING
but vma_end = ULONG_MAX. This changes the semantics of the WARN_ON_ONCE
check in free_pgtables().