Message-ID: <5a047afd-1e9f-4e68-8c8a-d5b0b5506bb3@suse.cz>
Date: Tue, 10 Dec 2024 09:29:59 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Shakeel Butt <shakeel.butt@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Yosry Ahmed <yosryahmed@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH] memcg: slub: fix SUnreclaim for post charged objects
On 12/10/24 05:06, Shakeel Butt wrote:
> Large kmalloc directly allocates from the page allocator and then uses
> lruvec_stat_mod_folio() to increment the unreclaimable slab stats for
> global and memcg. However, when post memcg charging of slab objects was
> added in commit 9028cdeb38e1 ("memcg: add charging of already allocated
> slab objects"), it failed to correctly handle the unreclaimable slab
> stats for memcg.
>
> One user visible effect of that bug is that the node level
> unreclaimable slab stat will work correctly but the memcg level stat
> can underflow, as the kernel correctly handles the free path while the
> charge path misses the increment of the memcg level unreclaimable slab
> stat. Let's fix this by handling it correctly in the post charge code
> path.
>
> Fixes: 9028cdeb38e1 ("memcg: add charging of already allocated slab objects")
That's a 6.12-rc1 commit, so I'm adding Cc: stable.
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
Queued in slab/for-next-fixes, thanks!
Vlastimil
> ---
> mm/slub.c | 21 ++++++++++++++++++---
> 1 file changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f62c829b7b6b..88bf2bf51bd6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2189,9 +2189,24 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
>
> folio = virt_to_folio(p);
> if (!folio_test_slab(folio)) {
> - return folio_memcg_kmem(folio) ||
> - (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
> - folio_order(folio)) == 0);
> + int size;
> +
> + if (folio_memcg_kmem(folio))
> + return true;
> +
> + if (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
> + folio_order(folio)))
> + return false;
> +
> + /*
> + * This folio has already been accounted in the global stats but
> + * not in the memcg stats. So, subtract from the global and use
> + * the interface which adds to both global and memcg stats.
> + */
> + size = folio_size(folio);
> + node_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, -size);
> + lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, size);
> + return true;
> }
>
> slab = folio_slab(folio);
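
For completeness, the reason the subtract-then-add dance in the hunk
above works is the relationship between the two helpers:
node_stat_mod_folio() updates only the per-node vmstat counter, while
lruvec_stat_mod_folio() updates the node counter and, when the folio is
charged, the memcg counter as well. A rough sketch of that relationship
(not verbatim kernel code; the mod_memcg_folio_stat_sketch() helper is
hypothetical, and a !CONFIG_MEMCG build reduces to the node-only
update):

  /* Sketch only: conceptual behavior of lruvec_stat_mod_folio(). */
  static inline void lruvec_stat_mod_folio_sketch(struct folio *folio,
                                                  enum node_stat_item idx,
                                                  int val)
  {
          /* node-level counter: always updated */
          node_stat_mod_folio(folio, idx, val);

          /* memcg-level counter: only for charged folios; a post
           * charged large kmalloc folio previously reached here only
           * on free, hence the underflow fixed above */
          if (folio_memcg(folio))
                  mod_memcg_folio_stat_sketch(folio, idx, val);
  }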