Message-ID: <2vxzikd4hvf7.fsf@kernel.org>
Date: Wed, 14 Jan 2026 18:59:56 +0000
From: Pratyush Yadav <pratyush@...nel.org>
To: Mike Rapoport <rppt@...nel.org>
Cc: Chris Mason <clm@...a.com>, Pratyush Yadav <pratyush@...nel.org>,
Pasha Tatashin <pasha.tatashin@...een.com>, jasonmiu@...gle.com,
graf@...zon.com, dmatlack@...gle.com, rientjes@...gle.com,
corbet@....net, rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com,
kanie@...ux.alibaba.com, ojeda@...nel.org, aliceryhl@...gle.com,
masahiroy@...nel.org, akpm@...ux-foundation.org, tj@...nel.org,
yoann.congal@...le.fr, mmaurer@...gle.com, roman.gushchin@...ux.dev,
chenridong@...wei.com, axboe@...nel.dk, mark.rutland@....com,
jannh@...gle.com, vincent.guittot@...aro.org, hannes@...xchg.org,
dan.j.williams@...el.com, david@...hat.com, joel.granados@...nel.org,
rostedt@...dmis.org, anna.schumaker@...cle.com, song@...nel.org,
linux@...ssschuh.net, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
gregkh@...uxfoundation.org, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, dave.hansen@...ux.intel.com, x86@...nel.org,
hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com,
Jonathan.Cameron@...wei.com, quic_zijuhu@...cinc.com,
aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com, lennart@...ttering.net, brauner@...nel.org,
linux-api@...r.kernel.org, linux-fsdevel@...r.kernel.org,
saeedm@...dia.com, ajayachandra@...dia.com, jgg@...dia.com,
parav@...dia.com, leonro@...dia.com, witu@...dia.com,
hughd@...gle.com, skhawaja@...gle.com, chrisl@...nel.org
Subject: Re: [PATCH v8 14/18] mm: memfd_luo: allow preserving memfd

On Wed, Jan 14 2026, Mike Rapoport wrote:
> On Tue, Jan 13, 2026 at 06:09:23AM -0800, Chris Mason wrote:
>> On Tue, 25 Nov 2025 11:58:44 -0500 Pasha Tatashin <pasha.tatashin@...een.com> wrote:
>>
>> Hi everyone,
>>
>> I'm running the v6.19 mm commits through patch review automation, and this
>> commit was flagged. I don't know this code well, but it looks like it
>> might be a real bug.
>>
>> [AI review output below]
>>
>> > diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
>> > new file mode 100644
>> > index 0000000000000..4f6ba63b43105
>> > --- /dev/null
>> > +++ b/mm/memfd_luo.c
>>
>> [ ... ]
>>
>> > +static int memfd_luo_retrieve_folios(struct file *file,
>> > +				     struct memfd_luo_folio_ser *folios_ser,
>> > +				     u64 nr_folios)
>> > +{
>> > +	struct inode *inode = file_inode(file);
>> > +	struct address_space *mapping = inode->i_mapping;
>> > +	struct folio *folio;
>> > +	int err = -EIO;
>> > +	long i;
>> > +
>> > +	for (i = 0; i < nr_folios; i++) {
>>
>> [ ... ]
>>
>> > +		err = shmem_add_to_page_cache(folio, mapping, index, NULL,
>> > +					      mapping_gfp_mask(mapping));
>> > +		if (err) {
>> > +			pr_err("shmem: failed to add to page cache folio index %ld: %d\n",
>> > +			       i, err);
>> > +			goto unlock_folio;
>> > +		}
>> > +
>> > +		if (flags & MEMFD_LUO_FOLIO_UPTODATE)
>> > +			folio_mark_uptodate(folio);
>> > +		if (flags & MEMFD_LUO_FOLIO_DIRTY)
>> > +			folio_mark_dirty(folio);
>> > +
>> > +		err = shmem_inode_acct_blocks(inode, 1);
>> > +		if (err) {
>> > +			pr_err("shmem: failed to account folio index %ld: %d\n",
>> > +			       i, err);
>> > +			goto unlock_folio;
>> > +		}
>>
>> When shmem_inode_acct_blocks() fails here, the folio has already been
>> added to the page cache by shmem_add_to_page_cache(). Should the folio be
>> removed from the page cache before going to unlock_folio?
>>
>> Looking at shmem_alloc_and_add_folio() in mm/shmem.c, when
>> shmem_inode_acct_blocks() fails after the folio has been added, it calls
>> filemap_remove_folio() to remove it:
>>
>> 	error = shmem_inode_acct_blocks(inode, pages);
>> 	if (error) {
>> 		...
>> 		if (error) {
>> 			filemap_remove_folio(folio);
>> 			goto unlock;
>> 		}
>> 	}
>>
>> Without this, the folio remains in the page cache (counted in
>> mapping->nrpages) but info->alloced is not incremented (since
>> shmem_recalc_inode is not called). This could cause a shmem accounting
>> inconsistency.
>
> My understanding is that if anything fails in memfd_luo_retrieve_folios()
> the file is destroyed anyway, so the accounting wouldn't matter.
>
> But to be on the safe side we should fix the error handling here.
> @Pratyush, what do you say?

Yeah, I don't think the inode's alloced accounting is a real issue here,
since the file will be destroyed immediately afterwards. That is why I
didn't want to add the extra error-handling complexity.

But now that I think of it, the lingering unaccounted folio might cause an
underflow in vm_committed_as. shmem_inode_acct_blocks() undoes its
vm_acct_memory() charge when it fails, but the final iput() might then
trigger an extra shmem_unacct_memory() because of the folio that was left
in the page cache.

I am not 100% sure that can actually happen since the code is a bit
complex. Let me check and get back to you.
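
If we do end up needing the fix, I think the minimal change is to drop the
folio from the page cache again when the accounting fails, the same way
shmem_alloc_and_add_folio() handles it. Something like this (completely
untested sketch):

	err = shmem_inode_acct_blocks(inode, 1);
	if (err) {
		pr_err("shmem: failed to account folio index %ld: %d\n",
		       i, err);
		/*
		 * The folio is already in the page cache at this point,
		 * so drop it again before bailing out, mirroring the
		 * error path in shmem_alloc_and_add_folio().
		 */
		filemap_remove_folio(folio);
		goto unlock_folio;
	}

That way a folio never lingers in the mapping without the matching block
accounting.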
--
Regards,
Pratyush Yadav