Message-ID: <8e7055a9-f899-4aad-8ed7-6543077c05d1@paulmck-laptop>
Date: Wed, 17 Sep 2025 04:36:12 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Harry Yoo <harry.yoo@...cle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Uladzislau Rezki <urezki@...il.com>,
Sidhartha Kumar <sidhartha.kumar@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
maple-tree@...ts.infradead.org
Subject: Re: [PATCH v8 04/23] slab: add sheaf support for batching kfree_rcu() operations

On Wed, Sep 17, 2025 at 11:55:10AM +0200, Vlastimil Babka wrote:
> On 9/17/25 10:30, Harry Yoo wrote:
> > On Wed, Sep 10, 2025 at 10:01:06AM +0200, Vlastimil Babka wrote:
> >> +/* needed for kvfree_rcu_barrier() */
> >> +void flush_all_rcu_sheaves(void)
> >> +{
> >> +	struct slub_percpu_sheaves *pcs;
> >> +	struct slub_flush_work *sfw;
> >> +	struct kmem_cache *s;
> >> +	bool flushed = false;
> >> +	unsigned int cpu;
> >> +
> >> +	cpus_read_lock();
> >> +	mutex_lock(&slab_mutex);
> >> +
> >> +	list_for_each_entry(s, &slab_caches, list) {
> >> +		if (!s->cpu_sheaves)
> >> +			continue;
> >> +
> >> +		mutex_lock(&flush_lock);
> >> +
> >> +		for_each_online_cpu(cpu) {
> >> +			sfw = &per_cpu(slub_flush, cpu);
> >> +			pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
> >> +
> >> +			if (!pcs->rcu_free || !pcs->rcu_free->size) {
> >
> > Is the compiler allowed to compile this to read pcs->rcu_free twice?
> > Something like:
> >
> > flush_all_rcu_sheaves()                  __kfree_rcu_sheaf()
> >
> > pcs->rcu_free != NULL
> >                                            pcs->rcu_free = NULL
> > pcs->rcu_free == NULL
> > /* NULL-pointer-deref */
> > pcs->rcu_free->size
>
> Good point, I'll remove the size check, so that a non-NULL pcs->rcu_free
> simply means we flush.
>
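Just to illustrate (untested sketch, not the actual patch), the per-CPU check
might then reduce to a single READ_ONCE() load, which also keeps the compiler
from re-reading the pointer:

	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		pcs = per_cpu_ptr(s->cpu_sheaves, cpu);

		/* Load pcs->rcu_free exactly once. */
		if (!READ_ONCE(pcs->rcu_free)) {
			sfw->skip = true;
			continue;
		}

		INIT_WORK(&sfw->work, flush_rcu_sheaf);
		sfw->skip = false;
		sfw->s = s;
		queue_work_on(cpu, flushwq, &sfw->work);
		flushed = true;
	}
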
> >> +				sfw->skip = true;
> >> +				continue;
> >> +			}
> >> +
> >> +			INIT_WORK(&sfw->work, flush_rcu_sheaf);
> >> +			sfw->skip = false;
> >> +			sfw->s = s;
> >> +			queue_work_on(cpu, flushwq, &sfw->work);
> >> +			flushed = true;
> >> +		}
> >> +
> >> +		for_each_online_cpu(cpu) {
> >> +			sfw = &per_cpu(slub_flush, cpu);
> >> +			if (sfw->skip)
> >> +				continue;
> >> +			flush_work(&sfw->work);
> >> +		}
> >> +
> >> +		mutex_unlock(&flush_lock);
> >> +	}
> >> +
> >> +	mutex_unlock(&slab_mutex);
> >> +	cpus_read_unlock();
> >> +
> >> +	if (flushed)
> >> +		rcu_barrier();
> >
> > I think we need to call rcu_barrier() even if flushed == false?
> >
> > Maybe a kvfree_rcu()'d object was already waiting for its RCU callback to
> > be processed before flush_all_rcu_sheaves() was called, and
> > in flush_all_rcu_sheaves() we skipped all (cache, cpu) pairs,
> > so flushed == false but that RCU callback still hasn't been processed
> > by the end of the function?
> >
> > That sounds very unlikely to happen in a realistic scenario,
> > but it's still possible...
>
> Yes, also a good point, will flush unconditionally.
>
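Presumably the tail of flush_all_rcu_sheaves() then just becomes something
like the following (untested sketch), with the flushed flag going away:

	mutex_unlock(&slab_mutex);
	cpus_read_unlock();

	/* Wait for callbacks even when no flush work was queued above. */
	rcu_barrier();
}
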
> Maybe in __kfree_rcu_sheaf() I should also move the call_rcu(...) before the
> local_unlock(), so we don't end up seeing a NULL pcs->rcu_free in
> flush_all_rcu_sheaves() on a CPU where __kfree_rcu_sheaf() has already set it
> to NULL but hasn't done the call_rcu() yet because it got preempted after
> local_unlock().
>
> But then rcu_barrier() itself probably won't guarantee that such CPUs have
> finished the local_lock()ed section if we didn't queue work on them. So maybe
> we need synchronize_rcu()?
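
If I understand the reordering you describe, it would look roughly like the
sketch below (untested; the rcu_free_sheaf callback, the rcu_head member, and
the lock name are placeholders for whatever the patch actually uses):

	rcu_sheaf = pcs->rcu_free;
	pcs->rcu_free = NULL;

	/*
	 * Queue the full sheaf before dropping the local lock, so that a
	 * CPU that has set pcs->rcu_free to NULL has also already done
	 * its call_rcu() by the time anyone can observe that NULL.
	 */
	call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);

	local_unlock(&s->cpu_sheaves->lock);
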
Do you need both rcu_barrier() and synchronize_rcu(), maybe along with
kvfree_rcu_barrier() as well? It would not be hard to make such a thing,
using workqueues or some such. Not sure what the API should look like,
especially should people want other RCU flavors to get into the act
as well.
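
For example, something along these lines (untested, and all of the names here
are invented for illustration) would let the various barriers run concurrently
rather than back to back:

	struct barrier_work {
		struct work_struct work;
		void (*barrier)(void);
	};

	static void barrier_work_fn(struct work_struct *work)
	{
		struct barrier_work *bw;

		bw = container_of(work, struct barrier_work, work);
		bw->barrier();
	}

	/* Wait for a grace period and both barriers in parallel. */
	static void rcu_and_kvfree_rcu_barrier(void)
	{
		struct barrier_work bw[] = {
			{ .barrier = synchronize_rcu },
			{ .barrier = rcu_barrier },
			{ .barrier = kvfree_rcu_barrier },
		};
		int i;

		for (i = 0; i < ARRAY_SIZE(bw); i++) {
			INIT_WORK_ONSTACK(&bw[i].work, barrier_work_fn);
			queue_work(system_unbound_wq, &bw[i].work);
		}

		for (i = 0; i < ARRAY_SIZE(bw); i++) {
			flush_work(&bw[i].work);
			destroy_work_on_stack(&bw[i].work);
		}
	}

The ONSTACK variants keep the work items on the caller's stack, which is fine
here because all of them are flushed before returning.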

							Thanx, Paul