Message-ID: <ZnBsomxy_cCnnIBy@zx2c4.com>
Date: Mon, 17 Jun 2024 19:04:34 +0200
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Uladzislau Rezki <urezki@...il.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Jakub Kicinski <kuba@...nel.org>,
Julia Lawall <Julia.Lawall@...ia.fr>, linux-block@...r.kernel.org,
kernel-janitors@...r.kernel.org, bridge@...ts.linux.dev,
linux-trace-kernel@...r.kernel.org,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
kvm@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
"Naveen N. Rao" <naveen.n.rao@...ux.ibm.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Nicholas Piggin <npiggin@...il.com>, netdev@...r.kernel.org,
wireguard@...ts.zx2c4.com, linux-kernel@...r.kernel.org,
ecryptfs@...r.kernel.org, Neil Brown <neilb@...e.de>,
Olga Kornievskaia <kolga@...app.com>, Dai Ngo <Dai.Ngo@...cle.com>,
Tom Talpey <tom@...pey.com>, linux-nfs@...r.kernel.org,
linux-can@...r.kernel.org, Lai Jiangshan <jiangshanlai@...il.com>,
netfilter-devel@...r.kernel.org, coreteam@...filter.org
Subject: Re: [PATCH 00/14] replace call_rcu by kfree_rcu for simple
kmem_cache_free callback
On Mon, Jun 17, 2024 at 06:38:52PM +0200, Vlastimil Babka wrote:
> On 6/17/24 6:33 PM, Jason A. Donenfeld wrote:
> > On Mon, Jun 17, 2024 at 6:30 PM Uladzislau Rezki <urezki@...il.com> wrote:
> >> Here, if "err" is less than "0" it means there are still objects,
> >> whereas "is_destroyed" is set to "true", which does not correlate
> >> with the comment:
> >>
> >> "Destruction happens when no objects"
> >
> > The comment is just poorly written. But the logic of the code is right.
> >
> >>
> >> > out_unlock:
> >> > mutex_unlock(&slab_mutex);
> >> > cpus_read_unlock();
> >> > diff --git a/mm/slub.c b/mm/slub.c
> >> > index 1373ac365a46..7db8fe90a323 100644
> >> > --- a/mm/slub.c
> >> > +++ b/mm/slub.c
> >> > @@ -4510,6 +4510,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
> >> > return;
> >> > trace_kmem_cache_free(_RET_IP_, x, s);
> >> > slab_free(s, virt_to_slab(x), x, _RET_IP_);
> >> > + if (s->is_destroyed)
> >> > + kmem_cache_destroy(s);
> >> > }
> >> > EXPORT_SYMBOL(kmem_cache_free);
> >> >
> >> > @@ -5342,9 +5344,6 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> >> > if (!slab->inuse) {
> >> > remove_partial(n, slab);
> >> > list_add(&slab->slab_list, &discard);
> >> > - } else {
> >> > - list_slab_objects(s, slab,
> >> > - "Objects remaining in %s on __kmem_cache_shutdown()");
> >> > }
> >> > }
> >> > spin_unlock_irq(&n->list_lock);
> >> >
> >> Anyway, it looks like it was not welcome to do it in the kmem_cache_free()
> >> function due to performance reasons.
> >
> > "was not welcome" - Vlastimil mentioned *potential* performance
> > concerns before I posted this. I suspect he might have a different
> > view now, maybe?
> >
> > Vlastimil, this is just checking a boolean (which could be
> > unlikely()'d), which should have pretty minimal overhead. Is that
> > alright with you?
>
> Well I doubt we can just set and check it without any barriers? The
> completion of the last pending kfree_rcu() might race with
> kmem_cache_destroy() in a way that will leave the cache there forever, no?
> And once we add barriers it becomes a perf issue?
Hm, yea, you might be right that barriers are required. But actually,
might this point toward a larger problem, no matter which approach -
polling or event - is chosen? The current rule is that kmem_cache_free()
must never race with kmem_cache_destroy(), because users have always
made diligent use of call_rcu()/rcu_barrier() and the like. But now
we're going to let those two race with each other - either via my check
in kmem_cache_free() above or via polling - so we're potentially going
to get into trouble and need some barriers anyway.
I think?
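To spell out the convention I mean, it's roughly the following (purely
illustrative sketch; the foo_* names are made up and not taken from any
real user):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		struct rcu_head rcu;
		/* ... */
	};

	static struct kmem_cache *foo_cache;

	static void foo_free_rcu(struct rcu_head *head)
	{
		struct foo *f = container_of(head, struct foo, rcu);

		/* Runs from the RCU callback; frees back into foo_cache. */
		kmem_cache_free(foo_cache, f);
	}

	static void foo_release(struct foo *f)
	{
		call_rcu(&f->rcu, foo_free_rcu);
	}

	static void foo_exit(void)
	{
		/*
		 * Wait for every pending foo_free_rcu() to finish, so no
		 * kmem_cache_free() on foo_cache can still be in flight...
		 */
		rcu_barrier();
		/* ...and only then tear the cache down. */
		kmem_cache_destroy(foo_cache);
	}

It's that explicit rcu_barrier() before kmem_cache_destroy() that keeps
the free and the destroy ordered today. Once the destroy can run
concurrently with the last free - whether via the flag check above or
via polling - nothing provides that ordering anymore, which is why the
barrier question comes back.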
Jason