Message-ID: <f994665e-9fe1-4aaa-9760-07f876441a64@redhat.com>
Date: Wed, 8 Nov 2023 18:38:43 +0100
From: David Hildenbrand <david@...hat.com>
To: Stefan Roesch <shr@...kernel.io>, kernel-team@...com
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, riel@...riel.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
stable@...r.kernel.org, willy@...radead.org
Subject: Re: [PATCH v3] mm: Fix for negative counter: nr_file_hugepages
On 08.11.23 18:15, Stefan Roesch wrote:
> While qualifying the 6.4 release, the following warning was detected in
> messages:
>
> vmstat_refresh: nr_file_hugepages -15664
>
> The warning is caused by the incorrect updating of the NR_FILE_THPS
> counter in the function split_huge_page_to_list. The code checks
> folio_test_swapbacked to choose between NR_SHMEM_THPS and NR_FILE_THPS,
> but neither branch checks folio_test_pmd_mappable before updating the
> counter. The other functions that manipulate these counters, like
> __filemap_add_folio and filemap_unaccount_folio, have the
> corresponding check.
>
> I have a test case, which reproduces the problem. It can be found here:
> https://github.com/sroeschus/testcase/blob/main/vmstat_refresh/madv.c
>
> The test case reproduces on an XFS filesystem. Running the same test
> case on a BTRFS filesystem does not reproduce the problem.
>
> AFAIK versions 6.1 through 6.6 are affected by this problem.
>
> Signed-off-by: Stefan Roesch <shr@...kernel.io>
> Co-debugged-by: Johannes Weiner <hannes@...xchg.org>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
> Cc: stable@...r.kernel.org
> ---
> mm/huge_memory.c | 16 +++++++++-------
> 1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 064fbd90822b4..874000f97bfc1 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2737,13 +2737,15 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> int nr = folio_nr_pages(folio);
>
> xas_split(&xas, folio, folio_order(folio));
> - if (folio_test_swapbacked(folio)) {
> - __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
> - -nr);
> - } else {
> - __lruvec_stat_mod_folio(folio, NR_FILE_THPS,
> - -nr);
> - filemap_nr_thps_dec(mapping);
> + if (folio_test_pmd_mappable(folio)) {
> + if (folio_test_swapbacked(folio)) {
> + __lruvec_stat_mod_folio(folio,
> + NR_SHMEM_THPS, -nr);
> + } else {
> + __lruvec_stat_mod_folio(folio,
> + NR_FILE_THPS, -nr);
> + filemap_nr_thps_dec(mapping);
> + }
Reviewed-by: David Hildenbrand <david@...hat.com>
Yes, that's the current state: update these counters only for
(traditional, IOW PMD-sized) THP. What we'll do with non-pmd-sized THP
remains to be discussed (Ryan had some ideas).
--
Cheers,
David / dhildenb