Message-ID: <d2103b59-5cff-48a8-9eb8-ff9498dbde5e@linux.dev>
Date: Thu, 30 Oct 2025 10:29:33 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: Zi Yan <ziy@...dia.com>
Cc: kernel@...kajraghav.com, akpm@...ux-foundation.org, mcgrof@...nel.org,
nao.horiguchi@...il.com, jane.chu@...cle.com,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
linmiaohe@...wei.com, Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>, Barry Song <baohua@...nel.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Wei Yang <richard.weiyang@...il.com>, Yang Shi <shy828301@...il.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, david@...hat.com
Subject: Re: [PATCH v4 2/3] mm/memory-failure: improve large block size folio
handling.
On 2025/10/30 09:40, Zi Yan wrote:
> Large block size (LBS) folios cannot be split to order-0 folios, only
> down to min_order_for_folio(). The current code fails the split outright,
> which is not optimal. Split the folio to min_order_for_folio() instead, so
> that after the split only the folio containing the poisoned page becomes
> unusable.
>
> For soft offline, do not split the large folio if its min_order_for_folio()
> is not 0, since the folio is still accessible from userspace and a premature
> split might cause a performance loss.
>
> Suggested-by: Jane Chu <jane.chu@...cle.com>
> Signed-off-by: Zi Yan <ziy@...dia.com>
> Reviewed-by: Luis Chamberlain <mcgrof@...nel.org>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> ---
LGTM! Feel free to add:
Reviewed-by: Lance Yang <lance.yang@...ux.dev>
> mm/memory-failure.c | 31 +++++++++++++++++++++++++++----
> 1 file changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..acc35c881547 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
> * there is still more to do, hence the page refcount we took earlier
> * is still needed.
> */
> -static int try_to_split_thp_page(struct page *page, bool release)
> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> + bool release)
> {
> int ret;
>
> lock_page(page);
> - ret = split_huge_page(page);
> + ret = split_huge_page_to_order(page, new_order);
> unlock_page(page);
>
> if (ret && release)
> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
> folio_unlock(folio);
>
> if (folio_test_large(folio)) {
> + const int new_order = min_order_for_split(folio);
> + int err;
> +
> /*
> * The flag must be set after the refcount is bumped
> * otherwise it may race with THP split.
> @@ -2294,7 +2298,16 @@ int memory_failure(unsigned long pfn, int flags)
> * page is a valid handlable page.
> */
> folio_set_has_hwpoisoned(folio);
> - if (try_to_split_thp_page(p, false) < 0) {
> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
> + /*
> + * If splitting a folio to order-0 fails, kill the process.
> + * Split the folio regardless to minimize unusable pages.
> + * Because the memory failure code cannot handle large
> + * folios, this split is always treated as if it failed.
> + */
> + if (err || new_order) {
> + /* get folio again in case the original one is split */
> + folio = page_folio(p);
> res = -EHWPOISON;
> kill_procs_now(p, pfn, flags, folio);
> put_page(p);
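
The new control flow in memory_failure() reads clearly to me: always
attempt the split down to min_order_for_split() to limit how much
memory becomes unusable, but since the memory-failure code can only
handle order-0 folios, a non-zero target order is still reported as a
failure for this pfn. A rough sketch of that decision, with made-up
names (this is not the kernel code itself):

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of the check after the split attempt in memory_failure(). */
    static bool treat_as_failed(int split_err, int new_order)
    {
            /*
             * Even a successful split to a non-zero order leaves the
             * poisoned page inside a large folio that memory-failure
             * cannot handle, so the result is the same as a failed
             * split: -EHWPOISON and kill the mapping processes.
             */
            return split_err != 0 || new_order != 0;
    }

    int main(void)
    {
            printf("%d\n", treat_as_failed(0, 0));  /* 0: order-0, keep going */
            printf("%d\n", treat_as_failed(0, 4));  /* 1: LBS folio, still kill */
            printf("%d\n", treat_as_failed(-1, 0)); /* 1: split failed outright */
            return 0;
    }
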
> @@ -2621,7 +2634,17 @@ static int soft_offline_in_use_page(struct page *page)
> };
>
> if (!huge && folio_test_large(folio)) {
> - if (try_to_split_thp_page(page, true)) {
> + const int new_order = min_order_for_split(folio);
> +
> + /*
> + * If new_order (target split order) is not 0, do not split the
> + * folio at all to retain the still accessible large folio.
> + * NOTE: if minimizing the number of soft offline pages is
> + * preferred, split it to non-zero new_order like it is done in
> + * memory_failure().
> + */
> + if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
> + /* release= */ true)) {
> pr_info("%#lx: thp split failed\n", pfn);
> return -EBUSY;
> }
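
And the soft offline side makes sense as the opposite policy: if the
minimum split order is non-zero, skip the split entirely so the still
accessible large folio keeps its benefits. A minimal sketch of the
contrast, again with illustrative names only:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Toy model of the soft_offline_in_use_page() policy: only try the
     * split when the folio can actually reach order 0; otherwise leave
     * the large folio intact since userspace can still access it.
     */
    static bool should_try_split(int min_order)
    {
            return min_order == 0;
    }

    int main(void)
    {
            printf("%d\n", should_try_split(0)); /* 1: split down to order-0 */
            printf("%d\n", should_try_split(4)); /* 0: keep the LBS folio whole */
            return 0;
    }
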