Message-ID: <CAJD7tkYg1beWy7g-5iyxwHOyeUw43MO6Yvnqr+ZqSsnQRRJ-SQ@mail.gmail.com>
Date: Tue, 11 Jun 2024 23:52:37 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand <david@...hat.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH unstable] mm: rmap: abstract updating per-node and
per-memcg stats fix
On Tue, Jun 11, 2024 at 10:10 PM Hugh Dickins <hughd@...gle.com> wrote:
>
> /proc/meminfo is showing ridiculously large numbers on some lines:
> __folio_remove_rmap()'s __folio_mod_stat() should be subtracting!
>
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
Reviewed-by: Yosry Ahmed <yosryahmed@...gle.com>
Thanks a lot for fixing this! I was just looking at a test failure
reported by the kernel test robot, caused by this bug [1].
Just to document my own stupidity here:
1. In [2], I sent a fix to use __mod_node_page_state() instead of
__lruvec_stat_mod_folio() in __folio_remove_rmap(). I made the same
mistake of replacing subtraction with addition.
2. In [3], I sent a v2 of that fix that correctly kept the
subtraction in __folio_remove_rmap().
3. In [4], I sent a cleanup on top of the fix, and that cleanup
replaced the subtraction in __folio_remove_rmap() with an addition,
again.
Apparently, I just suck at subtraction :)
[1]https://lore.kernel.org/linux-mm/202406121026.579593f2-oliver.sang@intel.com/
[2]https://lore.kernel.org/lkml/20240506170024.202111-1-yosryahmed@google.com/
[3]https://lore.kernel.org/lkml/20240506192924.271999-1-yosryahmed@google.com/
[4]https://lore.kernel.org/lkml/20240506211333.346605-1-yosryahmed@google.com/
> ---
> A fix for folding into mm-unstable, not needed for 6.10-rc.
>
> mm/rmap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1567,7 +1567,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> list_empty(&folio->_deferred_list))
> deferred_split_folio(folio);
> }
> - __folio_mod_stat(folio, nr, nr_pmdmapped);
> + __folio_mod_stat(folio, -nr, -nr_pmdmapped);
>
> /*
> * It would be tidy to reset folio_test_anon mapping when fully
> --
> 2.35.3
>