Message-ID: <ZPmesS66PTl+1Mdz@MiWiFi-R3L-srv>
Date: Thu, 7 Sep 2023 17:58:09 +0800
From: Baoquan He <bhe@...hat.com>
To: Uladzislau Rezki <urezki@...il.com>, k-hagio-ab@....com
Cc: lijiang@...hat.com, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Lorenzo Stoakes <lstoakes@...il.com>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Dave Chinner <david@...morbit.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
kexec@...ts.infradead.org
Subject: Re: [PATCH v2 4/9] mm: vmalloc: Remove global vmap_area_root rb-tree
On 09/07/23 at 11:39am, Uladzislau Rezki wrote:
> On Thu, Sep 07, 2023 at 10:17:39AM +0800, Baoquan He wrote:
> > Add Kazu and Lianbo to CC, and kexec mailing list
> >
> > On 08/29/23 at 10:11am, Uladzislau Rezki (Sony) wrote:
> > > Store allocated objects in separate nodes. A va->va_start
> > > address is converted to the correct node where the VA should
> > > be placed and reside. An addr_to_node() function is used
> > > to do the address conversion and determine the node that
> > > contains a VA.
> > >
> > > Such an approach balances VAs across nodes; as a result, access
> > > becomes scalable. The number of nodes in a system is the number
> > > of CPUs divided by two, i.e. the density factor is 1/2.
> > >
> > > Please note:
> > >
> > > 1. As of now, allocated VAs are bound to node-0, so the
> > > patch does not change anything compared with the current
> > > behavior;
> > >
> > > 2. The global vmap_area_lock and vmap_area_root are removed as
> > > there is no need for them anymore. The vmap_area_list is still
> > > kept and is _empty_. It is exported for kexec only;
> >
> > I haven't run a test, but accessing every node's busy tree to get
> > the va with the lowest address could severely impact kcore reading
> > efficiency on systems with many vmap nodes. People doing live
> > debugging via /proc/kcore will get an unpleasant surprise.
> >
> >
> > An empty vmap_area_list will break the makedumpfile utility, and the
> > Crash utility could be impacted too. I checked the makedumpfile code;
> > it relies on vmap_area_list to deduce the vmalloc_start value.
> >
> That part is left over, and I hope to fix it in v3. The problem here
> is that we cannot allow access to vmap internals from the outside.
> That is just not correct, i.e. you are not allowed to access the
> list directly.
Right. Thanks for the fix in v3; that is a relief for makedumpfile and
crash.
Hi Kazu,
Meanwhile, I am wondering whether we should evaluate the necessity of
vmap_area_list in makedumpfile and Crash. In makedumpfile, we only use
vmap_area_list to deduce VMALLOC_START. I wonder if we can export
VMALLOC_START directly. Admittedly, the lowest va->va_start in
vmap_area_list is a tighter lower boundary of the vmalloc area and can
reduce unnecessary scanning below the lowest va. Not sure if this is
the reason people decided to export vmap_area_list.
Thanks
Baoquan