Message-ID: <7ad34b69-2fb4-770b-14e5-bea13cf63d2f@huawei.com>
Date: Mon, 9 Feb 2026 19:54:21 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Jiaqi Yan <jiaqiyan@...gle.com>
CC: <nao.horiguchi@...il.com>, <tony.luck@...el.com>,
<wangkefeng.wang@...wei.com>, <willy@...radead.org>,
<akpm@...ux-foundation.org>, <osalvador@...e.de>, <rientjes@...gle.com>,
<duenwen@...gle.com>, <jthoughton@...gle.com>, <jgg@...dia.com>,
<ankita@...dia.com>, <peterx@...hat.com>, <sidhartha.kumar@...cle.com>,
<ziy@...dia.com>, <david@...hat.com>, <dave.hansen@...ux.intel.com>,
<muchun.song@...ux.dev>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <linux-fsdevel@...r.kernel.org>,
<william.roche@...cle.com>, <harry.yoo@...cle.com>, <jane.chu@...cle.com>
Subject: Re: [PATCH v3 1/3] mm: memfd/hugetlb: introduce memfd-based userspace
MFR policy
On 2026/2/4 3:23, Jiaqi Yan wrote:
> Sometimes immediately hard offlining a large chunk of contiguous memory
> having uncorrected memory errors (UE) may not be the best option.
> Cloud providers usually serve capacity- and performance-critical guest
> memory with 1G HugeTLB hugepages, as this significantly reduces the
> overhead associated with managing page tables and TLB misses. However,
> for today's HugeTLB system, once a byte of memory in a hugepage is
> hardware corrupted, the kernel discards the whole hugepage, including
> the healthy portion. Customer workloads running in the VM can hardly
> recover from such a large loss of memory.
Thanks for your patch. Some questions below.
>
> Therefore keeping or discarding a large chunk of contiguous memory
> owned by userspace (particularly to serve guest memory) due to
> recoverable UE may better be left to the userspace process
> that owns the memory, e.g. the VMM in a Cloud environment.
>
> Introduce a memfd-based userspace memory failure recovery (MFR)
> policy, MFD_MF_KEEP_UE_MAPPED. It is possible to support other
> memfd types, but the current implementation only covers HugeTLB.
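(Not an objection, just checking my understanding of the uapi: from
the VMM side this would be opted into roughly as in the sketch below.
The flag's numeric value is my assumption from this series; it is not
in any released uapi header.)

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Assumed from this series; not in released uapi headers. */
  #ifndef MFD_MF_KEEP_UE_MAPPED
  #define MFD_MF_KEEP_UE_MAPPED 0x0020U
  #endif
  #ifndef MFD_HUGE_1GB
  #define MFD_HUGE_1GB (30U << 26)  /* HUGETLB_FLAG_ENCODE_1GB */
  #endif

  int main(void)
  {
          /* 1G HugeTLB guest memory that keeps UE pages mapped. */
          int fd = memfd_create("guest-ram", MFD_HUGETLB | MFD_HUGE_1GB |
                                             MFD_MF_KEEP_UE_MAPPED);
          if (fd < 0) {
                  perror("memfd_create");
                  return 1;
          }
          /* ftruncate() + mmap() as usual ... */
          close(fd);
          return 0;
  }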
>
> For a hugepage associated with an MFD_MF_KEEP_UE_MAPPED-enabled memfd,
> whenever it runs into a new UE,
>
> * MFR defers hard offline operations, i.e., unmapping and
So the folio can't be unpoisoned until the hugetlb folio becomes free?
> dissolving. MFR still sets the HWPoison flag, holds a refcount
> for every raw HWPoison page, records them in a list, and sends
> SIGBUS to the consuming thread, but with si_addr_lsb reduced to
> PAGE_SHIFT.
> If userspace is able to handle the SIGBUS, the HWPoison hugepage
> remains accessible via the mapping created with that memfd.
>
> * If the memory has not been faulted in yet, the fault handler also
> allows faulting in the HWPoison folio.
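(On the SIGBUS semantics above, a minimal sketch of what I understand
the consumer side to look like: standard BUS_MCEERR_AR handling,
except si_addr_lsb now describes a single page rather than the whole
hugepage.)

  #define _GNU_SOURCE
  #include <signal.h>
  #include <stddef.h>
  #include <unistd.h>

  /* With MFD_MF_KEEP_UE_MAPPED, si_addr_lsb should be PAGE_SHIFT, so
   * only 2^si_addr_lsb bytes around si_addr are lost, not the whole
   * 1G hugepage. */
  static void sigbus_handler(int sig, siginfo_t *si, void *uc)
  {
          if (si->si_code == BUS_MCEERR_AR) {
                  size_t bad_len = (size_t)1 << si->si_addr_lsb;
                  /* Forward a page-sized poison event to the guest,
                   * or stop touching [si_addr, si_addr + bad_len). */
          }
          _exit(1);  /* sketch only; a real VMM would recover */
  }

  int main(void)
  {
          struct sigaction sa = {
                  .sa_sigaction = sigbus_handler,
                  .sa_flags = SA_SIGINFO,
          };
          sigaction(SIGBUS, &sa, NULL);
          /* ... mmap the memfd, run guest vCPUs ... */
          return 0;
  }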
>
> For an MFD_MF_KEEP_UE_MAPPED-enabled memfd, when it is closed, or
> when the userspace process truncates its hugepages:
>
> * When the HugeTLB in-memory file system removes the filemap's
> folios one by one, it asks MFR to deal with HWPoison folios
> on the fly, implemented by filemap_offline_hwpoison_folio().
>
> * MFR drops the refcounts being held for the raw HWPoison
> pages within the folio. Once the HWPoison folio becomes
> free, MFR dissolves it into a set of raw pages. The healthy pages
> are recycled into buddy allocator, while the HWPoison ones are
> prevented from re-allocation.
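(IIUC the trigger on the VMM side is just the normal removal paths,
something like:)

  ftruncate(fd, 0);  /* truncate all hugepages, or ... */
  close(fd);         /* ... drop the last reference to the memfd */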
>
...
>
> +static void filemap_offline_hwpoison_folio_hugetlb(struct folio *folio)
> +{
> + int ret;
> + struct llist_node *head;
> + struct raw_hwp_page *curr, *next;
> +
> + /*
> + * Since folio is still in the folio_batch, drop the refcount
> + * elevated by filemap_get_folios.
> + */
> + folio_put_refs(folio, 1);
> + head = llist_del_all(raw_hwp_list_head(folio));
Could we race with get_huge_page_for_hwpoison()? llist_add() might be
called by folio_set_hugetlb_hwpoison() just after llist_del_all().
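Something like this hypothetical interleaving, assuming nothing
serializes the two paths:

  CPU 0: offline path                  CPU 1: new UE in the same folio
  -----------------------------------  -----------------------------------
  filemap_offline_hwpoison_folio_hugetlb()
    head = llist_del_all(...)
                                       folio_set_hugetlb_hwpoison()
                                         llist_add(...)  /* entry missed */
    llist_for_each_entry_safe(...)
      /* walks only 'head'; the new
         entry's refcount and per-page
         HWPoison flag are never handled */
    dissolve_free_hugetlb_folio()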
> +
> + /*
> + * Release refcounts held by try_memory_failure_hugetlb, one per
> + * HWPoison-ed page in the raw hwp list.
> + *
> + * Set HWPoison flag on each page so that free_has_hwpoisoned()
> + * can exclude them during dissolve_free_hugetlb_folio().
> + */
> + llist_for_each_entry_safe(curr, next, head, node) {
> + folio_put(folio);
The hugetlb folio refcount is only increased once even if the folio
contains multiple UE sub-pages; see __get_huge_page_for_hwpoison() for
details. So folio_put() might be called more times here than
folio_try_get() was called in __get_huge_page_for_hwpoison().
> + SetPageHWPoison(curr->page);
If the hugetlb folio's vmemmap is optimized, I think SetPageHWPoison
might trigger a BUG.
> + kfree(curr);
> + }
The above logic is almost the same as folio_clear_hugetlb_hwpoison().
Maybe we can reuse it?
> +
> + /* Refcount now should be zero and ready to dissolve folio. */
> + ret = dissolve_free_hugetlb_folio(folio);
> + if (ret)
> + pr_err("failed to dissolve hugetlb folio: %d\n", ret);
> +}
> +
Thanks.