Message-ID: <aTUqKudyHxilbBpL@casper.infradead.org>
Date: Sun, 7 Dec 2025 07:18:02 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Dominique Martinet <asmadeus@...ewreck.org>
Cc: Christian Schoenebeck <linux_oss@...debyte.com>,
Chris Arges <carges@...udflare.com>,
David Howells <dhowells@...hat.com>, ericvh@...nel.org,
lucho@...kov.net, v9fs@...ts.linux.dev,
linux-kernel@...r.kernel.org, kernel-team@...udflare.com
Subject: Re: kernel BUG when mounting large block xfs backed by 9p (folio ref
count bug)

On Fri, Dec 05, 2025 at 10:48:55PM +0900, Dominique Martinet wrote:
> Your patch will appear to work (folioq path won't go there so the page
> won't be pinned), but I'm not sure just being a folio is enough of a
> guarantee here? For example, is a folio coming from page cache
> (e.g. readahead) guaranteed to be stable while it is being read? Can
> something (try to) kill that thread while the IO is in progress and
> reclaim the memory?

In readahead, we allocate a folio, lock it, and add it to the page cache.
We then submit it to the filesystem for read.  It cannot be truncated
from the page cache until the filesystem unlocks it (generally by calling
folio_end_read(), but some filesystems explicitly call folio_unlock()
instead).  So you don't need to take an extra reference to it.
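
Purely as illustration, the ordering is roughly the sketch below.  It is
simplified from mm/readahead.c / mm/filemap.c rather than the actual
code, and readahead_one_folio() / fs_read_done() are made-up names for
the sketch:

/*
 * Allocate a folio, add it (locked) to the page cache, and hand it to
 * the filesystem to read.  Error handling trimmed for brevity.
 */
static void readahead_one_folio(struct address_space *mapping,
                                struct file *file, pgoff_t index,
                                gfp_t gfp)
{
        struct folio *folio = filemap_alloc_folio(gfp, 0);

        if (!folio)
                return;

        /* filemap_add_folio() returns with the folio locked on success. */
        if (filemap_add_folio(mapping, folio, index, gfp) < 0) {
                folio_put(folio);
                return;
        }

        /*
         * The folio is still locked here.  Truncation has to take the
         * folio lock first, so the folio cannot leave the page cache
         * until the filesystem unlocks it after the read completes.
         */
        mapping->a_ops->read_folio(file, folio);
}

/* In the filesystem's read-completion path ('err' from the I/O): */
static void fs_read_done(struct folio *folio, int err)
{
        folio_end_read(folio, err == 0);  /* marks uptodate and unlocks */
        /*
         * (Some filesystems instead call folio_mark_uptodate() and
         * folio_unlock() explicitly; either way the unlock is what lets
         * truncation proceed again.)
         */
}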