Message-ID: <aSMsqD5mB2mHHH9v@kernel.org>
Date: Sun, 23 Nov 2025 17:47:52 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Pasha Tatashin <pasha.tatashin@...een.com>
Cc: pratyush@...nel.org, jasonmiu@...gle.com, graf@...zon.com,
dmatlack@...gle.com, rientjes@...gle.com, corbet@....net,
rdunlap@...radead.org, ilpo.jarvinen@...ux.intel.com,
kanie@...ux.alibaba.com, ojeda@...nel.org, aliceryhl@...gle.com,
masahiroy@...nel.org, akpm@...ux-foundation.org, tj@...nel.org,
yoann.congal@...le.fr, mmaurer@...gle.com, roman.gushchin@...ux.dev,
chenridong@...wei.com, axboe@...nel.dk, mark.rutland@....com,
jannh@...gle.com, vincent.guittot@...aro.org, hannes@...xchg.org,
dan.j.williams@...el.com, david@...hat.com,
joel.granados@...nel.org, rostedt@...dmis.org,
anna.schumaker@...cle.com, song@...nel.org, linux@...ssschuh.net,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-mm@...ck.org, gregkh@...uxfoundation.org, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, rafael@...nel.org, dakr@...nel.org,
bartosz.golaszewski@...aro.org, cw00.choi@...sung.com,
myungjoo.ham@...sung.com, yesanishhere@...il.com,
Jonathan.Cameron@...wei.com, quic_zijuhu@...cinc.com,
aleksander.lobakin@...el.com, ira.weiny@...el.com,
andriy.shevchenko@...ux.intel.com, leon@...nel.org, lukas@...ner.de,
bhelgaas@...gle.com, wagi@...nel.org, djeffery@...hat.com,
stuart.w.hayes@...il.com, ptyadav@...zon.de, lennart@...ttering.net,
brauner@...nel.org, linux-api@...r.kernel.org,
linux-fsdevel@...r.kernel.org, saeedm@...dia.com,
ajayachandra@...dia.com, jgg@...dia.com, parav@...dia.com,
leonro@...dia.com, witu@...dia.com, hughd@...gle.com,
skhawaja@...gle.com, chrisl@...nel.org
Subject: Re: [PATCH v7 14/22] mm: memfd_luo: allow preserving memfd
On Sat, Nov 22, 2025 at 05:23:41PM -0500, Pasha Tatashin wrote:
> From: Pratyush Yadav <ptyadav@...zon.de>
>
> The ability to preserve a memfd allows userspace to use KHO and LUO to
> transfer its memory contents to the next kernel. This is useful in many
> ways. For one, it can be used with IOMMUFD as the backing store for
> IOMMU page tables. Preserving IOMMUFD is essential for performing a
> hypervisor live update with passthrough devices. memfd support provides
> the first building block for making that possible.
>
> For another, for applications with a large amount of in-memory state
> that takes time to reconstruct, reboots to consume kernel upgrades can
> be very expensive. memfd with LUO gives those applications
> reboot-persistent memory that they can use to quickly save and restore
> that state.
>
> While a memfd can be backed by either hugetlbfs or shmem, currently
> only shmem is supported. To be more precise, support is added for
> anonymous shmem files.
>
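That is, files created with memfd_create() without MFD_HUGETLB, e.g.
(name arbitrary):

	#define _GNU_SOURCE
	#include <sys/mman.h>

	/* an anonymous shmem file, now eligible for preservation */
	int fd = memfd_create("vm_state", MFD_CLOEXEC);

hugetlb-backed memfds are not covered yet.
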
> The handover to the next kernel is not transparent. Not all properties
> of the file are preserved; only its memory contents, position, and
> size are. The recreated file gets the UID and GID of the task doing
> the restore, and the memory is charged to that task's cgroup.
>
> Once preserved, the file cannot grow or shrink, and all its pages are
> pinned to prevent migration and swapping. The file can still be read
> from or written to.
>
> Use vmalloc to allocate the buffer that holds the folio descriptors,
> and preserve it using kho_preserve_vmalloc(), which does not impose a
> size limit.
>
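For reference, the preserve side then looks roughly like this (a sketch
only; kho_preserve_vmalloc() as in the KHO vmalloc-preservation
patches, and 'ser->folios' is a made-up name for wherever the struct
kho_vmalloc descriptor gets serialized):

	struct memfd_luo_folio_ser *folios_ser;
	int err;

	folios_ser = vmalloc(array_size(nr_folios, sizeof(*folios_ser)));
	if (!folios_ser)
		return -ENOMEM;

	/* ... fill in pfn, index and flags for every preserved folio ... */

	err = kho_preserve_vmalloc(folios_ser, &ser->folios);

Since this is a plain vmalloc allocation, the number of folio
descriptors is not limited the way a physically contiguous buffer would
be.
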
> Signed-off-by: Pratyush Yadav <ptyadav@...zon.de>
> Co-developed-by: Pasha Tatashin <pasha.tatashin@...een.com>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@...een.com>
> ---
...
> +static int memfd_luo_retrieve_folios(struct file *file,
> + struct memfd_luo_folio_ser *folios_ser,
> + u64 nr_folios)
> +{
> + struct inode *inode = file_inode(file);
> + struct address_space *mapping = inode->i_mapping;
> + struct folio *folio;
> + long i = 0;
> + int err;
> +
> + for (; i < nr_folios; i++) {
> + const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
> + phys_addr_t phys;
> + u64 index;
> + int flags;
> +
> + if (!pfolio->pfn)
> + continue;
> +
> + phys = PFN_PHYS(pfolio->pfn);
> + folio = kho_restore_folio(phys);
> + if (!folio) {
> + pr_err("Unable to restore folio at physical address: %llx\n",
> + phys);
> + err = -ENOMEM;
> + goto put_folios;
> + }
> + index = pfolio->index;
> + flags = pfolio->flags;
> +
> + /* Set up the folio for insertion. */
> + __folio_set_locked(folio);
> + __folio_set_swapbacked(folio);
> +
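> + /* Charge the folio to the cgroup of the task doing the restore. */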
> + err = mem_cgroup_charge(folio, NULL, mapping_gfp_mask(mapping));
> + if (err) {
> + pr_err("shmem: failed to charge folio index %ld: %d\n",
> + i, err);
> + goto unlock_folio;
> + }
> +
> + err = shmem_add_to_page_cache(folio, mapping, index, NULL,
> + mapping_gfp_mask(mapping));
> + if (err) {
> + pr_err("shmem: failed to add to page cache folio index %ld: %d\n",
> + i, err);
> + goto unlock_folio;
> + }
> +
> + if (flags & MEMFD_LUO_FOLIO_UPTODATE)
> + folio_mark_uptodate(folio);
> + if (flags & MEMFD_LUO_FOLIO_DIRTY)
> + folio_mark_dirty(folio);
> +
> + err = shmem_inode_acct_blocks(inode, 1);
> + if (err) {
> + pr_err("shmem: failed to account folio index %ld: %d\n",
> + i, err);
> + goto unlock_folio;
> + }
> +
> + shmem_recalc_inode(inode, 1, 0);
> + folio_add_lru(folio);
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return 0;
> +
> +unlock_folio:
> + folio_unlock(folio);
> + folio_put(folio);
> + i++;
I'd add a counter and use it in the for loop below, something like
(untested):
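
	long j;
	...
unlock_folio:
	folio_unlock(folio);
	folio_put(folio);
put_folios:
	/* folio i was already handled above; clean up the rest */
	for (j = i + 1; j < nr_folios; j++) {
		folio = kho_restore_folio(PFN_PHYS(folios_ser[j].pfn));
		if (folio)
			folio_put(folio);
	}
	return err;
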
> +put_folios:
> + /*
> + * Note: don't free the folios already added to the file. They will be
> + * freed when the file is freed. Free the ones not added yet here.
> + */
> + for (; i < nr_folios; i++) {
> + const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
> +
> + folio = kho_restore_folio(PFN_PHYS(pfolio->pfn));
> + if (folio)
> + folio_put(folio);
> + }
> +
> + return err;
> +}
Reviewed-by: Mike Rapoport (Microsoft) <rppt@...nel.org>
--
Sincerely yours,
Mike.