Message-ID: <0003a78b-e66f-41a5-9244-89c2c430cfa4@redhat.com>
Date: Tue, 3 Jun 2025 14:15:16 +0200
From: David Hildenbrand <david@...hat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Barry Song <21cnbao@...il.com>
Cc: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Barry Song <v-songbaohua@...o.com>, "Liam R. Howlett"
<Liam.Howlett@...cle.com>, Vlastimil Babka <vbabka@...e.cz>,
Jann Horn <jannh@...gle.com>, Suren Baghdasaryan <surenb@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
Tangquan Zheng <zhengtangquan@...o.com>
Subject: Re: [PATCH RFC] mm: madvise: use walk_page_range_vma() for
madvise_free_single_vma()

On 03.06.25 11:41, Lorenzo Stoakes wrote:
> On Tue, Jun 03, 2025 at 08:47:04PM +1200, Barry Song wrote:
>> On Tue, Jun 3, 2025 at 6:11 PM Dev Jain <dev.jain@....com> wrote:
>>>
>>>
>>> On 03/06/25 7:01 am, Barry Song wrote:
>>>> From: Barry Song <v-songbaohua@...o.com>
>>>>
>>>> We've already found the VMA before calling madvise_free_single_vma(),
>>>> so calling walk_page_range() and doing find_vma() again seems
>>>> unnecessary. It also prevents potential optimizations for MADV_FREE
>>>> to use a per-VMA lock.
>>>>
>>>> Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>
>>>> Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
>>>> Cc: David Hildenbrand <david@...hat.com>
>>>> Cc: Vlastimil Babka <vbabka@...e.cz>
>>>> Cc: Jann Horn <jannh@...gle.com>
>>>> Cc: Suren Baghdasaryan <surenb@...gle.com>
>>>> Cc: Lokesh Gidra <lokeshgidra@...gle.com>
>>>> Cc: Tangquan Zheng <zhengtangquan@...o.com>
>>>> Signed-off-by: Barry Song <v-songbaohua@...o.com>
>>>> ---
>>>> mm/madvise.c | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>> index d408ffa404b3..c6a28a2d3ff8 100644
>>>> --- a/mm/madvise.c
>>>> +++ b/mm/madvise.c
>>>> @@ -826,7 +826,7 @@ static int madvise_free_single_vma(struct madvise_behavior *madv_behavior,
>>>>
>>>> mmu_notifier_invalidate_range_start(&range);
>>>> tlb_start_vma(tlb, vma);
>>>> - walk_page_range(vma->vm_mm, range.start, range.end,
>>>> + walk_page_range_vma(vma, range.start, range.end,
>>>> &madvise_free_walk_ops, tlb);
>>>> tlb_end_vma(tlb, vma);
>>>> mmu_notifier_invalidate_range_end(&range);
>>>
>>> Can similar optimizations be made in madvise_willneed(), madvise_cold_page_range(), etc?
>>
>> Yes, I think the same code flow applies to madvise_willneed,
>> madvise_cold_page_range, and similar functions, though my current
>> interest is more on madvise_free.
>>
>> Let me prepare a v2 that includes those as well.
>
> FWIW Dev makes a great point here and I agree wholeheartedly, let's fix all such
> cases...
>
> As an aside, I wonder if we previously didn't do this because we hadn't
> previously exposed the walk_page_range_vma() API or something?
IIRC, yes:
commit e07cda5f232fac4de0925d8a4c92e51e41fa2f6e
Author: David Hildenbrand <david@...hat.com>
Date:   Fri Oct 21 12:11:39 2022 +0200

    mm/pagewalk: add walk_page_range_vma()

    Let's add walk_page_range_vma(), which is similar to walk_page_vma(),
    however, is only interested in a subset of the VMA range.

    To be used in KSM code to stop using follow_page() next.
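[Editorial note: for readers following along, a sketch of the two pagewalk entry points being discussed. Signatures are approximate, from the include/linux/pagewalk.h of this era; the key difference is that walk_page_range() must re-resolve VMAs from the mm, while walk_page_range_vma() is handed the VMA the caller already looked up.]

```c
/* Walks [start, end) across the mm; internally iterates VMAs,
 * re-finding each one (the lookup the patch above avoids). */
int walk_page_range(struct mm_struct *mm, unsigned long start,
		    unsigned long end, const struct mm_walk_ops *ops,
		    void *private);

/* Walks only the given, already-located VMA (clamped to [start, end)),
 * so no repeated VMA lookup is needed. */
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
			unsigned long end, const struct mm_walk_ops *ops,
			void *private);
```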
--
Cheers,
David / dhildenb