Message-ID: <CAJuCfpG6cCo1WUo3N116DOavmRE6=aeS_s2Hzceqdytgc955xw@mail.gmail.com>
Date: Thu, 15 May 2025 08:03:59 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: "Liam R. Howlett" <Liam.Howlett@...cle.com>, Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>, Roman Gushchin <roman.gushchin@...ux.dev>,
Harry Yoo <harry.yoo@...cle.com>, Uladzislau Rezki <urezki@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
maple-tree@...ts.infradead.org
Subject: Re: [PATCH v4 2/9] slab: add sheaf support for batching kfree_rcu() operations
On Thu, May 15, 2025 at 1:45 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 5/14/25 16:01, Vlastimil Babka wrote:
> > On 5/6/25 23:34, Suren Baghdasaryan wrote:
> >> On Fri, Apr 25, 2025 at 1:27 AM Vlastimil Babka <vbabka@...e.cz> wrote:
> >>> @@ -2631,6 +2637,24 @@ static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
> >>> sheaf->size = 0;
> >>> }
> >>>
> >>> +static void __rcu_free_sheaf_prepare(struct kmem_cache *s,
> >>> + struct slab_sheaf *sheaf);
> >>
> >> I think you could safely move __rcu_free_sheaf_prepare() here and
> >> avoid the above forward declaration.
> >
> > Right, done.
> >
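
(For illustration only, with a hypothetical helper rather than the actual
mm/slub.c code: defining a static function above its first caller is what
lets the forward declaration be dropped.)

/* Hypothetical example, not the real slab code. */
static void flush_widget(int *count)	/* defined before its first use ... */
{
	*count = 0;
}

static void use_widget(int *count)
{
	flush_widget(count);		/* ... so no forward declaration is needed */
}
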
> >>> @@ -5304,6 +5340,140 @@ bool free_to_pcs(struct kmem_cache *s, void *object)
> >>> return true;
> >>> }
> >>>
> >>> +static void __rcu_free_sheaf_prepare(struct kmem_cache *s,
> >>> + struct slab_sheaf *sheaf)
> >>
> >> This function seems to be an almost exact copy of free_to_pcs_bulk()
> >> from your previous patch. Maybe they can be consolidated?
> >
> > True, I've extracted it to __kmem_cache_free_bulk_prepare().
>
> ... and that was a mistake, as free_to_pcs_bulk() diverges in patch 9/9 in a
> way that makes this consolidation infeasible.

Ah, I see. Makes sense. Sorry for the misleading suggestion.
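
For anyone skimming the thread, the consolidation being discussed had roughly
this shape (an illustrative sketch with hypothetical names, not the code from
the series): both bulk-free paths would funnel their objects through one
shared prepare step that filters the batch before it is queued.

#include <stddef.h>

/* Illustrative only; names and layout are hypothetical, not from the series. */
struct obj_batch {
	void **objects;
	size_t count;
};

/*
 * Shared prepare step: compact the batch, dropping any object that a
 * free-path check (a stand-in test here) says should not be queued.
 */
static size_t free_batch_prepare(struct obj_batch *b)
{
	size_t kept = 0;

	for (size_t i = 0; i < b->count; i++) {
		if (b->objects[i] != NULL)	/* stand-in for the real hooks */
			b->objects[kept++] = b->objects[i];
	}
	b->count = kept;
	return kept;
}

Once one caller has to diverge (as free_to_pcs_bulk() does by patch 9/9), a
shared helper like this either grows caller-specific branches or the
duplication comes back, so keeping the two paths separate ends up simpler.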