Message-ID: <ZQBl66DDGL2FspV8@MiWiFi-R3L-srv>
Date: Tue, 12 Sep 2023 21:21:47 +0800
From: Baoquan He <bhe@...hat.com>
To: Uladzislau Rezki <urezki@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Lorenzo Stoakes <lstoakes@...il.com>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Dave Chinner <david@...morbit.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>
Subject: Re: [PATCH v2 6/9] mm: vmalloc: Offload free_vmap_area_lock lock
On 09/11/23 at 07:10pm, Uladzislau Rezki wrote:
> On Mon, Sep 11, 2023 at 11:25:01AM +0800, Baoquan He wrote:
> > On 08/29/23 at 10:11am, Uladzislau Rezki (Sony) wrote:
> > > Concurrent access to a global vmap space is a bottleneck.
> > > We can simulate high contention by running a vmalloc test
> > > suite.
> > >
> > > To address it, introduce an effective vmap node logic. Each
> > > node behaves as an independent entity. When a node is
> > > accessed, it serves a request directly if possible; it can
> > > also fetch a new block from the global heap into its
> > > internals if no space or only low capacity is left.
> > >
> > > This technique reduces pressure on the global vmap lock.
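
(As an aside, a toy userspace model of the contention-splitting idea may
help picture it: N nodes, each with its own lock, picked by the current
CPU, so the fast path never takes a global lock. NR_NODES, the struct
and function names, and the use of sched_getcpu() below are my own
stand-ins, not the patch's code.)

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_NODES 8

struct toy_node {
	pthread_mutex_t lock;	/* per-node lock instead of one global lock */
	long nr_allocs;		/* bookkeeping guarded by 'lock' */
};

static struct toy_node nodes[NR_NODES];

static struct toy_node *this_node(void)
{
	/* same spirit as this_node_id(): current CPU modulo node count */
	return &nodes[sched_getcpu() % NR_NODES];
}

static void toy_alloc(void)
{
	struct toy_node *vn = this_node();

	/* threads on different CPUs mostly hit different locks */
	pthread_mutex_lock(&vn->lock);
	vn->nr_allocs++;
	pthread_mutex_unlock(&vn->lock);
}

int main(void)
{
	int i;

	for (i = 0; i < NR_NODES; i++)
		pthread_mutex_init(&nodes[i].lock, NULL);

	toy_alloc();
	printf("allocs on this CPU's node: %ld\n", this_node()->nr_allocs);
	return 0;
}
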
> > >
> > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
> > > ---
> > > mm/vmalloc.c | 316 +++++++++++++++++++++++++++++++++++++++++++++------
> > > 1 file changed, 279 insertions(+), 37 deletions(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 5a8a9c1370b6..4fd4915c532d 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -779,6 +779,7 @@ struct rb_list {
> > >
> > > struct vmap_node {
> > > /* Bookkeeping data of this node. */
> > > + struct rb_list free;
> > > struct rb_list busy;
> > > struct rb_list lazy;
> > >
> > > @@ -786,6 +787,13 @@ struct vmap_node {
> > > * Ready-to-free areas.
> > > */
> > > struct list_head purge_list;
> > > + struct work_struct purge_work;
> > > + unsigned long nr_purged;
> > > +
> > > + /*
> > > + * Control that only one user can pre-fetch this node.
> > > + */
> > > + atomic_t fill_in_progress;
> > > };
> > >
> > > static struct vmap_node *nodes, snode;
> > > @@ -804,6 +812,32 @@ addr_to_node(unsigned long addr)
> > > return &nodes[addr_to_node_id(addr)];
> > > }
> > >
> > > +static inline struct vmap_node *
> > > +id_to_node(int id)
> > > +{
> > > + return &nodes[id % nr_nodes];
> > > +}
> > > +
> > > +static inline int
> > > +this_node_id(void)
> > > +{
> > > + return raw_smp_processor_id() % nr_nodes;
> > > +}
> > > +
> > > +static inline unsigned long
> > > +encode_vn_id(int node_id)
> > > +{
> > > + /* Can store U8_MAX [0:254] nodes. */
> > > + return (node_id + 1) << BITS_PER_BYTE;
> > > +}
> > > +
> > > +static inline int
> > > +decode_vn_id(unsigned long val)
> > > +{
> > > + /* Can store U8_MAX [0:254] nodes. */
> > > + return (val >> BITS_PER_BYTE) - 1;
> > > +}
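
For what it's worth, the two helpers above round-trip cleanly; a minimal
standalone check (the local BITS_PER_BYTE define, the cast, and main()
are mine) confirms that ids 0..254 survive encode/decode and that a
flags value with no encoded id decodes to -1. The shift by one byte
presumably keeps the low byte of va->flags free for the existing VMAP_*
flag bits:

#include <assert.h>
#include <stdio.h>

#define BITS_PER_BYTE 8

/* mirrors encode_vn_id() from the patch */
static unsigned long encode_vn_id(int node_id)
{
	return (unsigned long)(node_id + 1) << BITS_PER_BYTE;
}

/* mirrors decode_vn_id() from the patch */
static int decode_vn_id(unsigned long val)
{
	return (int)(val >> BITS_PER_BYTE) - 1;
}

int main(void)
{
	int id;

	for (id = 0; id <= 254; id++)
		assert(decode_vn_id(encode_vn_id(id)) == id);

	/* a flags value with no encoded node id decodes to -1 */
	assert(decode_vn_id(0) == -1);

	printf("vn_id encode/decode round-trip OK\n");
	return 0;
}
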
> >
> > This patch looks good to me. However, should we split the encoding
> > of vn_id into va->flags out into a separate patch? It looks like an
> > independent optimization which can be described better with its own
> > specific log. At least, in the pasted pdf file and the patch log,
> > it's not obvious that:
> > 1) a node's free tree can contain any address range;
> > 2) a node's busy tree only contains address ranges belonging to this
> >    node;
> >    - as a corner case, it could contain a range crossing nodes;
> > 3) a node's purge tree can contain any address range;
> >    - decided by the vn_id encoded in va->flags;
> >    - otherwise decided by the address, via addr_to_node(va->va_start).
> >
> > Personal opinion, but I feel it will make reviewing easier; see the
> > sketch below for an illustration of the routing in points 2)-3).
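
To make point 3) concrete (and, by contrast, point 2), here is a
hypothetical userspace model of the routing rule as I read it: on free,
the vn_id decoded from va->flags wins if present, otherwise the node is
derived from the address. decode_vn_id() mirrors the patch; NR_NODES,
the stand-in mapping in addr_to_node_id(), and node_for_free() are
purely illustrative:

#include <stdio.h>

#define BITS_PER_BYTE	8
#define NR_NODES	4

static int decode_vn_id(unsigned long val)
{
	return (int)(val >> BITS_PER_BYTE) - 1;
}

static int addr_to_node_id(unsigned long addr)
{
	/* stand-in mapping; the patch derives this from the address */
	return (int)((addr >> 20) % NR_NODES);
}

/* which node's tree does a freed area go back to? */
static int node_for_free(unsigned long va_start, unsigned long flags)
{
	int vn_id = decode_vn_id(flags);

	/*
	 * Point 3: an area allocated from a node's own pool carries an
	 * encoded vn_id, so a node's purge tree can hold any range.
	 */
	if (vn_id >= 0)
		return vn_id % NR_NODES;

	/*
	 * Point 2: with no encoded id, fall back to the address, so a
	 * busy tree only holds ranges that map to that node.
	 */
	return addr_to_node_id(va_start);
}

int main(void)
{
	printf("routed by flags: node %d\n",
	       node_for_free(0x100000UL, (3 + 1) << BITS_PER_BYTE));
	printf("routed by addr:  node %d\n",
	       node_for_free(0x500000UL, 0));
	return 0;
}
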
> >
> Sure, if it is easier to review, then I will split these two parts.
> All three statements are correct and valid. The pdf file only covers
> v1, so it is not up to date.
>
> Anyway, I will update the cover letter in v3 with more details.
Maybe providing these details in the patch log or cover letter is
enough. I'll leave it to you to decide whether splitting the patch is
still needed. Thanks.