Message-ID: <46960f37-0d12-4cfd-a214-1ddae2495665@redhat.com>
Date: Wed, 5 Mar 2025 20:19:41 +0100
From: David Hildenbrand <david@...hat.com>
To: Matthew Wilcox <willy@...radead.org>, SeongJae Park <sj@...nel.org>
Cc: "Liam R. Howlett" <howlett@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Shakeel Butt <shakeel.butt@...ux.dev>, Vlastimil Babka <vbabka@...e.cz>,
kernel-team@...a.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH 00/16] mm/madvise: batch tlb flushes for MADV_DONTNEED
and MADV_FREE
On 05.03.25 19:56, Matthew Wilcox wrote:
> On Wed, Mar 05, 2025 at 10:15:55AM -0800, SeongJae Park wrote:
>> For MADV_DONTNEED[_LOCKED] or MADV_FREE madvise requests, tlb flushes
>> can happen for each vma of the given address ranges. Because such tlb
>> flushes are for address ranges of the same process, doing them in a
>> batch is more efficient while still being safe. Modify the madvise()
>> and process_madvise() entry-level code paths to do such batched tlb
>> flushes, while the internal unmap logic only gathers the tlb entries
>> to flush.
>
> Do real applications actually do madvise requests that span multiple
> VMAs? It just seems weird to me. Like, each vma comes from a separate
> call to mmap [1], so why would it make sense for an application to
> call madvise() across a VMA boundary?
I had the same question. If this happens in an app, I would assume that
a single MADV_DONTNEED call would usually not span multiple VMAs, and
if it does, not span so many (or so often) that we would really care
about it.
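
For illustration only (my sketch, not from the series): a single
MADV_DONTNEED range can still cross VMA boundaries even when the memory
came from one mmap() call, because an mprotect() on a sub-range splits
the VMA first. Minimal userspace example:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t pg = (size_t)sysconf(_SC_PAGESIZE);
	size_t len = 4 * pg;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* Dropping write on the middle pages splits the single VMA
	 * into three: the neighbors can no longer merge. */
	if (mprotect(p + pg, 2 * pg, PROT_READ))
		return 1;

	/* This one madvise() call now spans three VMAs. */
	return madvise(p, len, MADV_DONTNEED) != 0;
}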
OTOH, optimizing tlb flushing for a vectored MADV_DONTNEED would make
more sense to me. I don't recall whether process_madvise() already
allows for that, and if it does, is this series primarily tackling
that optimization?
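
For reference, a sketch of what such a vectored MADV_DONTNEED would
look like from userspace via process_madvise(). The syscall and its
signature exist today; whether the kernel accepts MADV_DONTNEED through
it is exactly the open question above:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	size_t pg = (size_t)sysconf(_SC_PAGESIZE);
	int pidfd = (int)syscall(SYS_pidfd_open, getpid(), 0);
	struct iovec vec[2];

	vec[0].iov_base = mmap(NULL, pg, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	vec[1].iov_base = mmap(NULL, pg, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	vec[0].iov_len = vec[1].iov_len = pg;

	/* One syscall, many discontiguous ranges: the natural place
	 * to batch tlb flushes. flags must be 0. Fails with EINVAL
	 * on kernels that reject MADV_DONTNEED here. */
	return syscall(SYS_process_madvise, pidfd, vec, 2,
		       MADV_DONTNEED, 0) < 0;
}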
--
Cheers,
David / dhildenb