Message-ID: <Z8tVrOezU2q_0ded@casper.infradead.org>
Date: Fri, 7 Mar 2025 20:23:08 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Sooyong Suk <s.suk@...sung.com>, viro@...iv.linux.org.uk,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
linux-mm@...ck.org, jaewon31.kim@...il.com, spssyr@...il.com
Subject: Re: [RFC PATCH] block, fs: use FOLL_LONGTERM as gup_flags for direct
IO
On Thu, Mar 06, 2025 at 07:26:52AM -0800, Christoph Hellwig wrote:
> On Thu, Mar 06, 2025 at 04:40:56PM +0900, Sooyong Suk wrote:
> > There are GUP references to pages that are serving as direct IO buffers.
> > Those pages can be allocated from CMA pageblocks even though they can
> > remain pinned until the DIO completes.
>
> direct I/O is exactly the case that is not FOLL_LONGTERM and one of
> the reasons to even have the flag. So big fat no to this.
>
> You also completely failed to address the relevant mailing list and
> maintainers.
You're right; this patch is so bad that it's insulting.
However, the problem is real. And the alternative "solution" being
proposed is worse -- reintroducing cleancache and frontswap.
What I've been asking for and don't have the answer to yet is:
- What latency is acceptable to reclaim the pages allocated from CMA
pageblocks?
- Can we afford a TLB shootdown? An rmap walk?
- Is the problem with anonymous or pagecache memory?
I have vaguely been wondering about creating a separate (fake) NUMA node
for the CMA memory so that userspace can control "none of this memory is
in the CMA blocks". But that's not a great solution either.