Message-ID: <ac5d6544-32cb-4ae1-a62a-7720b67b4042@suse.cz>
Date: Tue, 1 Apr 2025 09:53:05 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Cc: bpf@...r.kernel.org, daniel@...earbox.net, andrii@...nel.org,
martin.lau@...nel.org, akpm@...ux-foundation.org, peterz@...radead.org,
bigeasy@...utronix.de, rostedt@...dmis.org, shakeel.butt@...ux.dev,
mhocko@...e.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/page_alloc: Fix try_alloc_pages
On 4/1/25 05:23, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@...nel.org>
>
> Fix an obvious bug: try_alloc_pages() should call set_page_refcounted().
>
> Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
>
> As soon as I fast-forwarded and reran the tests, the bug was
> seen immediately.
> I'm completely baffled how I managed to lose this hunk.
I think the earlier versions were done on an older base than v6.14-rc1,
which acquired efabfe1420f5 ("mm/page_alloc: move set_page_refcounted() to
callers of get_page_from_freelist()").
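For reference, after that commit the calling convention looks roughly like
this (alloc_example() is a made-up caller; only the set_page_refcounted()
placement matters):

/*
 * Sketch only, not actual kernel code: since efabfe1420f5,
 * get_page_from_freelist() returns the page frozen (refcount 0) and
 * each caller has to set the refcount itself.
 */
static struct page *alloc_example(gfp_t gfp, unsigned int order,
                                  int alloc_flags,
                                  const struct alloc_context *ac)
{
        struct page *page;

        page = get_page_from_freelist(gfp, order, alloc_flags, ac);
        if (page)
                set_page_refcounted(page);      /* now the caller's job */
        return page;
}

So a branch based on an older tree would not have needed the hunk above,
which would explain how it got lost in the rebase.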
> I'm pretty sure I manually tested various code paths of the
> trylock logic with CONFIG_DEBUG_VM=y.
> Pure incompetence :(
> Shame.
> ---
> mm/page_alloc.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ffbb5678bc2f..c0bcfe9d0dd9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7248,6 +7248,9 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
>
> /* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
>
> + if (page)
> + set_page_refcounted(page);
Note for the later try-kmalloc integration: slab uses frozen pages now, so
we'll need to split out a frozen variant of this API.
But this is OK as a bugfix for now.
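The split could look roughly like this (try_alloc_frozen_pages_noprof() is
a hypothetical name, not an existing API):

/*
 * Hypothetical sketch of the frozen split: slab would call the frozen
 * variant directly, and the current entry point would become a thin
 * wrapper that sets the refcount for regular page users.
 */
struct page *try_alloc_pages_noprof(int nid, unsigned int order)
{
        struct page *page = try_alloc_frozen_pages_noprof(nid, order);

        if (page)
                set_page_refcounted(page);
        return page;
}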
> +
> if (memcg_kmem_online() && page &&
> unlikely(__memcg_kmem_charge_page(page, alloc_gfp, order) != 0)) {
> free_pages_nolock(page, order);
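FWIW, for anyone following along, the intended pairing of this API is
roughly the following (grab_scratch_page()/drop_scratch_page() are made-up
helpers, not kernel code):

/*
 * Illustrative only: try_alloc_pages() is opportunistic (it only
 * trylocks), so it may return NULL even when memory is available and
 * callers must handle that; free_pages_nolock() is the matching free.
 */
static void *grab_scratch_page(int nid)
{
        struct page *page = try_alloc_pages(nid, 0);

        if (!page)
                return NULL;
        return page_address(page);
}

static void drop_scratch_page(void *addr)
{
        free_pages_nolock(virt_to_page(addr), 0);
}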