Message-ID: <CAADnVQ+V_RAMfM9GLfMq4pyAM6xaSnUQ2sqS0oisDZmaWvC5Uw@mail.gmail.com>
Date: Tue, 1 Apr 2025 10:56:02 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>, bpf <bpf@...r.kernel.org>,
Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>, Sebastian Sewior <bigeasy@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>, Shakeel Butt <shakeel.butt@...ux.dev>,
Michal Hocko <mhocko@...e.com>, linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/page_alloc: Fix try_alloc_pages
On Tue, Apr 1, 2025 at 12:53 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 4/1/25 05:23, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@...nel.org>
> >
> > Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
> >
> > Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> > Signed-off-by: Alexei Starovoitov <ast@...nel.org>
>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
>
> > ---
> >
> > As soon as I fast-forwarded and reran the tests, the bug
> > showed up immediately.
> > I'm completely baffled how I managed to lose this hunk.
>
> I think the earlier versions were done on older base than v6.14-rc1 which
> acquired efabfe1420f5 ("mm/page_alloc: move set_page_refcounted() to callers
> of get_page_from_freelist()")
ohh. Thanks.
Still, I have no excuse for not doing full integration testing.
I will learn this hard lesson.
> > I'm pretty sure I manually tested various code paths of
> > trylock logic with CONFIG_DEBUG_VM=y.
> > Pure incompetence :(
> > Shame.
> > ---
> > mm/page_alloc.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index ffbb5678bc2f..c0bcfe9d0dd9 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -7248,6 +7248,9 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
> >
> > /* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
> >
> > + if (page)
> > + set_page_refcounted(page);
>
> Note for the later try-kmalloc integration, slab uses frozen pages now, so
> we'll need to split out a frozen variant of this API.
Thanks for the heads up.
> But this is ok as a bugfix for now.
>
> > +
> > if (memcg_kmem_online() && page &&
> > unlikely(__memcg_kmem_charge_page(page, alloc_gfp, order) != 0)) {
> > free_pages_nolock(page, order);
>