Message-ID: <20230926052158epcms1p7fd7f3e3f523e5209977d3f5c62e85afa@epcms1p7>
Date: Tue, 26 Sep 2023 14:21:58 +0900
From: Jaeseon Sim <jason.sim@...sung.com>
To: Uladzislau Rezki <urezki@...il.com>
CC: "bhe@...hat.com" <bhe@...hat.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hch@...radead.org" <hch@...radead.org>,
"lstoakes@...il.com" <lstoakes@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jaewon Kim <jaewon31.kim@...sung.com>
Subject: Re: [PATCH] mm/vmalloc: Remove WARN_ON_ONCE related to
adjust_va_to_fit_type
> On Mon, Sep 25, 2023 at 07:51:54PM +0900, Jaeseon Sim wrote:
> > > On 09/22/23 at 05:34pm, Baoquan He wrote:
> > > > Hi Jaeseon,
> > Hello Baoquan,
> > > >
> > > > On 09/22/23 at 03:27pm, Jaeseon Sim wrote:
> > > > > There's panic issue as follows when do alloc_vmap_area:
> > > > >
> > > > > Kernel panic - not syncing: kernel: panic_on_warn set ...
> > > > >
> > > > > page allocation failure: order:0, mode:0x800(GFP_NOWAIT)
> > > > > Call Trace:
> > > > > warn_alloc+0xf4/0x190
> > > > > __alloc_pages_slowpath+0xe0c/0xffc
> > > > > __alloc_pages+0x250/0x2d0
> > > > > new_slab+0x17c/0x4e0
> > > > > ___slab_alloc+0x4e4/0x8a8
> > > > > __slab_alloc+0x34/0x6c
> > > > > kmem_cache_alloc+0x20c/0x2f0
> > > > > adjust_va_to_fit_type
> > > > > __alloc_vmap_area
> > > > > alloc_vmap_area+0x298/0x7fc
> > > > > __get_vm_area_node+0x10c/0x1b4
> > > > > __vmalloc_node_range+0x19c/0x7c0
> >
> > To Uladzislau,
> > Sorry. The path is as below.
> >
> > Call trace:
> > alloc_vmap_area+0x298/0x7fc
> > __get_vm_area_node+0x10c/0x1b4
> > __vmalloc_node_range+0x19c/0x7c0
> > dup_task_struct+0x1b8/0x3b0
> > copy_process+0x170/0xc40
> >
> > > > >
> > > > > Commit 1b23ff80b399 ("mm/vmalloc: invoke classify_va_fit_type() in
> > > > > adjust_va_to_fit_type()") moved classify_va_fit_type() into
> > > > > adjust_va_to_fit_type() and used WARN_ON_ONCE() to handle return
> > > > > value of adjust_va_to_fit_type(), just as classify_va_fit_type()
> > > > > was handled.
> > > >
> > > > I don't get what you are fixing. In commit 1b23ff80b399, we have
> > > ~~ s/In/Before/, typo
> > > > "if (WARN_ON_ONCE(type == NOTHING_FIT))", it's the same as the current
> > > > code. You set panic_on_warn, it will panic in old code before commit
> > > > 1b23ff80b399. Isn't it an expected behaviour?
> > There is a call path which didn't panic in the old code, but does in the current one.
> >
> > static __always_inline int adjust_va_to_fit_type()
> >
> > } else if (type == NE_FIT_TYPE) {
> > lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
> > if (!lva)
> > return -1;
> >
> >
> We do not have above code anymore:
Sorry, I tried to describe it in a simplified way and that caused a misunderstanding.
<snip>
static __always_inline int
adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
                      struct vmap_area *va, unsigned long nva_start_addr,
                      unsigned long size)
        } else if (type == NE_FIT_TYPE) {
                /*
                 * Split no edge of fit VA.
                 *
                 *     |       |
                 *   L V  NVA  V R
                 *   |---|-------|---|
                 */
                lva = __this_cpu_xchg(ne_fit_preload_node, NULL);
                if (unlikely(!lva)) {
                        /*
                         * For percpu allocator we do not do any pre-allocation
                         * and leave it as it is. The reason is it most likely
                         * never ends up with NE_FIT_TYPE splitting. In case of
                         * percpu allocations offsets and sizes are aligned to
                         * fixed align request, i.e. RE_FIT_TYPE and FL_FIT_TYPE
                         * are its main fitting cases.
                         *
                         * There are a few exceptions though, as an example it is
                         * a first allocation (early boot up) when we have "one"
                         * big free space that has to be split.
                         *
                         * Also we can hit this path in case of regular "vmap"
                         * allocations, if "this" current CPU was not preloaded.
                         * See the comment in alloc_vmap_area() why. If so, then
                         * GFP_NOWAIT is used instead to get an extra object for
                         * split purpose. That is rare and most time does not
                         * occur.
                         *
                         * What happens if an allocation gets failed. Basically,
                         * an "overflow" path is triggered to purge lazily freed
                         * areas to free some memory, then, the "retry" path is
                         * triggered to repeat one more time. See more details
                         * in alloc_vmap_area() function.
                         */
                        lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
                        if (!lva)
                                return -1;
                }
<snip>
The above allocation failure will hit the WARN_ON_ONCE() in the current kernel.
Should it be handled by alloc_vmap_area() instead, as you described in the comment?
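
To make the question concrete, the change I have in mind is roughly the below
(an untested sketch based on my reading of __alloc_vmap_area() in mm/vmalloc.c
around v6.1, not the exact patch):

<snip>
        /* Update the free vmap_area. */
        ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
        if (ret)                /* was: if (WARN_ON_ONCE(ret)) */
                return vend;
<snip>

If I read alloc_vmap_area() correctly, "addr == vend" then takes the overflow
label, which purges the lazily freed areas and retries once, as the comment
above describes, instead of warning and panicking when panic_on_warn is set.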
Thanks!
Jaeseon
>
> <snip>
> commit 82dd23e84be3ead53b6d584d836f51852d1096e6
> Author: Uladzislau Rezki (Sony) <urezki@...il.com>
> Date: Thu Jul 11 20:58:57 2019 -0700
>
> mm/vmalloc.c: preload a CPU with one object for split purpose
>
> <snip>
>
> Which kernel are you testing?
I'm currently testing v6.1.
The panic occurred during a power on/off test.
>
> Thanks!
>
> --
> Uladzislau Rezki