Message-ID: <CAK1f24kwVP2SG2B5WFcHpRkA8fa_fdcEMD8i3N3e-vw9YPabEg@mail.gmail.com>
Date: Mon, 22 Apr 2024 15:58:03 +0800
From: Lance Yang <ioworker0@...il.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, david@...hat.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: huge_memory: add the missing folio_test_pmd_mappable()
for THP split statistics
On Mon, Apr 22, 2024 at 3:33 PM Baolin Wang
<baolin.wang@...ux.alibaba.com> wrote:
>
> Now that mTHP can also be split or added to the deferred list, add a
> folio_test_pmd_mappable() check so that only PMD-mapped THP is counted,
> to avoid skewing the PMD-mapped THP related statistics.
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Acked-by: David Hildenbrand <david@...hat.com>
LGTM!
Reviewed-by: Lance Yang <ioworker0@...il.com>
Thanks,
Lance
> ---
> Changes from v1:
> - Add acked tag from David.
> - Check whether the folio is PMD-mappable earlier, before the folio is
>   split, per Lance's suggestion.
> ---
> mm/huge_memory.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 716d29c21b6e..a9789ca823bc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2994,6 +2994,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
> struct anon_vma *anon_vma = NULL;
> struct address_space *mapping = NULL;
> + bool is_thp = folio_test_pmd_mappable(folio);
> int extra_pins, ret;
> pgoff_t end;
> bool is_hzp;
> @@ -3172,7 +3173,8 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> i_mmap_unlock_read(mapping);
> out:
> xas_destroy(&xas);
> - count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
> + if (is_thp)
> + count_vm_event(!ret ? THP_SPLIT_PAGE : THP_SPLIT_PAGE_FAILED);
> return ret;
> }
>
> @@ -3234,7 +3236,8 @@ void deferred_split_folio(struct folio *folio)
>
> spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> if (list_empty(&folio->_deferred_list)) {
> - count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> + if (folio_test_pmd_mappable(folio))
> + count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
> ds_queue->split_queue_len++;
> #ifdef CONFIG_MEMCG
> --
> 2.39.3
>
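For anyone who wants to watch the effect of this change while testing, the
three vm events touched by the diff are exported through /proc/vmstat under
their lowercase names (thp_split_page, thp_split_page_failed,
thp_deferred_split_page). Below is a minimal userspace sketch, not part of
the patch, that just prints those counters; it assumes a kernel built with
CONFIG_TRANSPARENT_HUGEPAGE and a readable /proc/vmstat.

/*
 * Minimal sketch: print the vmstat counters affected by this patch,
 * i.e. the lowercase forms of THP_SPLIT_PAGE, THP_SPLIT_PAGE_FAILED
 * and THP_DEFERRED_SPLIT_PAGE.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *events[] = {
		"thp_split_page",
		"thp_split_page_failed",
		"thp_deferred_split_page",
	};
	char name[128];
	unsigned long long val;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("fopen /proc/vmstat");
		return 1;
	}

	/* /proc/vmstat is a list of "<name> <value>" lines. */
	while (fscanf(f, "%127s %llu", name, &val) == 2) {
		for (size_t i = 0; i < sizeof(events) / sizeof(events[0]); i++) {
			if (!strcmp(name, events[i]))
				printf("%-28s %llu\n", name, val);
		}
	}

	fclose(f);
	return 0;
}

With the patch applied, these counters should move only for PMD-mapped THP
splits, not for smaller-order mTHP splits.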