Message-ID: <246b547e-d7ad-44c7-9652-6f5a72828b26@lucifer.local>
Date: Tue, 11 Mar 2025 14:01:20 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: SeongJae Park <sj@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Liam R. Howlett" <howlett@...il.com>,
David Hildenbrand <david@...hat.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>, kernel-team@...a.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 9/9] mm/madvise: remove !tlb support from
madvise_{dontneed,free}_single_vma()
On Mon, Mar 10, 2025 at 10:23:18AM -0700, SeongJae Park wrote:
> madvise_dontneed_single_vma() and madvise_free_single_vma() support both
> batched and unbatched tlb flush use cases, depending on the value of the
> received tlb parameter. This dual support existed to allow a safe,
> incremental transition from unbatched to batched flushes. The transition
> is now complete, so there is no remaining unbatched tlb flush use case.
> Remove the code supporting the no-longer-used cases.
>
> Signed-off-by: SeongJae Park <sj@...nel.org>
Obviously I support this based on previous preview :) but I wonder if we
can avoid this horrid caller_tlb pattern in the first instance.
FWIW:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> ---
> mm/madvise.c | 19 ++-----------------
> 1 file changed, 2 insertions(+), 17 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index d5f4ce3041a4..25af0a24c00b 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -795,18 +795,11 @@ static const struct mm_walk_ops madvise_free_walk_ops = {
> };
>
> static int madvise_free_single_vma(
> - struct mmu_gather *caller_tlb, struct vm_area_struct *vma,
> + struct mmu_gather *tlb, struct vm_area_struct *vma,
> unsigned long start_addr, unsigned long end_addr)
> {
> struct mm_struct *mm = vma->vm_mm;
> struct mmu_notifier_range range;
> - struct mmu_gather self_tlb;
> - struct mmu_gather *tlb;
> -
> - if (caller_tlb)
> - tlb = caller_tlb;
> - else
> - tlb = &self_tlb;
>
> /* MADV_FREE works for only anon vma at the moment */
> if (!vma_is_anonymous(vma))
> @@ -822,8 +815,6 @@ static int madvise_free_single_vma(
> range.start, range.end);
>
> lru_add_drain();
> - if (!caller_tlb)
> - tlb_gather_mmu(tlb, mm);
> update_hiwater_rss(mm);
>
> mmu_notifier_invalidate_range_start(&range);
> @@ -832,9 +823,6 @@ static int madvise_free_single_vma(
> &madvise_free_walk_ops, tlb);
> tlb_end_vma(tlb, vma);
> mmu_notifier_invalidate_range_end(&range);
> - if (!caller_tlb)
> - tlb_finish_mmu(tlb);
> -
> return 0;
> }
>
> @@ -866,10 +854,7 @@ static long madvise_dontneed_single_vma(struct mmu_gather *tlb,
> .even_cows = true,
> };
>
> - if (!tlb)
> - zap_page_range_single(vma, start, end - start, &details);
> - else
> - unmap_vma_single(tlb, vma, start, end - start, &details);
> + unmap_vma_single(tlb, vma, start, end - start, &details);
> return 0;
> }
>
> --
> 2.39.5