Message-ID: <ZnqUF14C_2RH71wu@pc636>
Date: Tue, 25 Jun 2024 11:55:35 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Hailong Liu <hailong.liu@...o.com>, Baoquan He <bhe@...hat.com>
Cc: Uladzislau Rezki <urezki@...il.com>, Baoquan He <bhe@...hat.com>,
Nick Bowler <nbowler@...conx.ca>, linux-kernel@...r.kernel.org,
Linux regressions mailing list <regressions@...ts.linux.dev>,
linux-mm@...ck.org, sparclinux@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: PROBLEM: kernel crashes when running xfsdump since ~6.4
On Tue, Jun 25, 2024 at 05:26:01PM +0800, Hailong Liu wrote:
> On Mon, 24. Jun 14:18, Uladzislau Rezki wrote:
> > >
> > > IMO, I thought we can fix this by following.
> > > It doesn't initialize unused variables and utilize the percpu xarray. If I said
> > > anything wrong, please do let me know. I can learn a lot from you all :).
> > >
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 11fe5ea208aa..f9f981674b2d 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -4480,17 +4480,21 @@ void __init vmalloc_init(void)
> > > */
> > > vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
> > >
> > > - for_each_possible_cpu(i) {
> > > + for (i = 0; i < nr_cpu_ids; i++) {
> > > struct vmap_block_queue *vbq;
> > > struct vfree_deferred *p;
> > >
> > > vbq = &per_cpu(vmap_block_queue, i);
> > > + xa_init(&vbq->vmap_blocks);
> > > +
> > > + if (!cpu_possible(i))
> > Why do you need such check?
> IIUC, take this issue as an example: the cpu_possible_mask is b101 and
> nr_cpu_ids is 3. For i = 1 there is no need to initialize the unused
> per-cpu variables here; initializing the xarray for that hash index is
> sufficient.
>
But it does not make much sense to skip the initialization or keep it "half"
initialized. If we initialize everything, we can at least guarantee that all
data structures are properly set up.

One concern is what Baoquan raised about per-cpu variables: in your b101
scenario, accessing the second per-cpu variable (the one for CPU 1, which is
not a possible CPU) is not allowed. We need to check that concern properly.
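
To make the concern concrete, below is a rough sketch, not a patch. It
assumes the addr_to_vb_xa() helper in mm/vmalloc.c hashes an address onto a
per-cpu index; the exact expression may differ from what is in mainline. It
shows how a lookup can land on a hole in a sparse cpu_possible_mask such as
b101, and one possible way to redirect the index to a possible CPU instead of
touching an uninitialized per-cpu area:

/*
 * Sketch only. With cpu_possible_mask == 0b101, nr_cpu_ids is 3 but only
 * CPUs 0 and 2 exist, so a hash modulo nr_cpu_ids can produce index 1,
 * whose vmap_block_queue was never set up by the for_each_possible_cpu()
 * loop in vmalloc_init().
 */
static struct xarray *
addr_to_vb_xa(unsigned long addr)
{
	int index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids;

	/*
	 * If the hash hits a not-possible CPU, move on to the next
	 * possible one. nr_cpu_ids - 1 is always a possible CPU, so
	 * cpumask_next() cannot run past the end of the mask here.
	 */
	if (!cpu_possible(index))
		index = cpumask_next(index, cpu_possible_mask);

	return &per_cpu(vmap_block_queue, index).vmap_blocks;
}

Whether redirecting the index like this, or initializing the xarrays for all
nr_cpu_ids entries as in your diff, is the right direction is exactly the
open question above.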
--
Uladzislau Rezki