Message-ID: <Z8ieZVFEa6vAouvu@casper.infradead.org>
Date: Wed, 5 Mar 2025 18:56:37 +0000
From: Matthew Wilcox <willy@...radead.org>
To: SeongJae Park <sj@...nel.org>
Cc: "Liam R. Howlett" <howlett@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>, kernel-team@...a.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH 00/16] mm/madvise: batch tlb flushes for
MADV_DONTNEED and MADV_FREE
On Wed, Mar 05, 2025 at 10:15:55AM -0800, SeongJae Park wrote:
> For MADV_DONTNEED[_LOCKED] or MADV_FREE madvise requests, tlb flushes
> can happen for each vma of the given address ranges. Because such tlb
> flushes are for address ranges of the same process, doing them in a
> batch is more efficient while still being safe. Modify the madvise()
> and process_madvise() entry-level code paths to do such batched tlb
> flushes, while the internal unmap logic only gathers the tlb entries
> to flush.
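
Just so I'm sure I'm reading the cover letter right, the shape would be
roughly the below -- madvise_dontneed_single_vma() is a made-up
placeholder for the per-VMA work (which only gathers into the shared
mmu_gather), not the series' actual code:

static int madvise_dontneed_batched(struct mm_struct *mm,
				    unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma;
	int ret = 0;

	/* caller is assumed to hold the mmap lock */
	tlb_gather_mmu(&tlb, mm);	/* one gather for the whole request */

	for (vma = find_vma(mm, start); vma && vma->vm_start < end;
	     vma = find_vma(mm, vma->vm_end)) {
		unsigned long s = max(start, vma->vm_start);
		unsigned long e = min(end, vma->vm_end);

		/* placeholder: per-VMA unmap that only gathers, no flush */
		ret = madvise_dontneed_single_vma(&tlb, vma, s, e);
		if (ret)
			break;
	}

	tlb_finish_mmu(&tlb);		/* single batched tlb flush */
	return ret;
}
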
Do real applications actually do madvise requests that span multiple
VMAs? It just seems weird to me. Like, each vma comes from a separate
call to mmap [1], so why would it make sense for an application to
call madvise() across a VMA boundary?
[1] Yes, I know we sometimes merge and/or split VMAs
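
For example, a toy program like the below (not taken from the series)
ends up with one MADV_DONTNEED spanning three VMAs only because an
earlier mprotect() split the mapping:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t pg = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, 16 * pg, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* mprotect part of the mapping: the single VMA becomes three */
	mprotect(p + 4 * pg, 4 * pg, PROT_READ);

	/* one madvise() call now covers three VMAs */
	if (madvise(p, 16 * pg, MADV_DONTNEED))
		perror("madvise");

	munmap(p, 16 * pg);
	return 0;
}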