Message-ID: <853a21e6479b44b10b8f6c9874124c82c13bed3c.camel@redhat.com>
Date: Wed, 06 Dec 2023 21:14:18 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Suman Ghosh <sumang@...vell.com>, Sunil Kovvuri Goutham
<sgoutham@...vell.com>, Geethasowjanya Akula <gakula@...vell.com>,
Subbaraya Sundeep Bhatta <sbhatta@...vell.com>, Hariprasad Kelam
<hkelam@...vell.com>, "davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>, "kuba@...nel.org"
<kuba@...nel.org>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Linu Cherian
<lcherian@...vell.com>, Jerin Jacob Kollanukkaran <jerinj@...vell.com>,
"horms@...nel.org" <horms@...nel.org>, "wojciech.drewek@...el.com"
<wojciech.drewek@...el.com>
Subject: Re: [EXT] Re: [net-next PATCH v6 1/2] octeontx2-af: Add new mbox to
support multicast/mirror offload

On Wed, 2023-12-06 at 16:33 +0000, Suman Ghosh wrote:
> > On Mon, 2023-12-04 at 19:49 +0530, Suman Ghosh wrote:
> > > A new mailbox is added to support offloading of multicast/mirror
> > > functionality. The mailbox also supports dynamic updates to the
> > > multicast/mirror list.
> > >
> > > Signed-off-by: Suman Ghosh <sumang@...vell.com>
> > > Reviewed-by: Wojciech Drewek <wojciech.drewek@...el.com>
> > > Reviewed-by: Simon Horman <horms@...nel.org>
> >
> > Note that v5 was already applied to net-next. But I still have a
> > relevant note, see below.
> >
> > > @@ -5797,3 +6127,337 @@ int rvu_mbox_handler_nix_bandprof_get_hwinfo(struct rvu *rvu, struct msg_req *re
> > >
> > >  	return 0;
> > >  }
> > > +
> > > +static struct nix_mcast_grp_elem *rvu_nix_mcast_find_grp_elem(struct nix_mcast_grp *mcast_grp,
> > > +							       u32 mcast_grp_idx)
> > > +{
> > > +	struct nix_mcast_grp_elem *iter;
> > > +	bool is_found = false;
> > > +
> > > +	mutex_lock(&mcast_grp->mcast_grp_lock);
> > > +	list_for_each_entry(iter, &mcast_grp->mcast_grp_head, list) {
> > > +		if (iter->mcast_grp_idx == mcast_grp_idx) {
> > > +			is_found = true;
> > > +			break;
> > > +		}
> > > +	}
> > > +	mutex_unlock(&mcast_grp->mcast_grp_lock);
> >
> > AFAICS, at this point another thread/CPU could kick in and run
> > rvu_mbox_handler_nix_mcast_grp_destroy() to completion, freeing
> > 'iter' before it is later used by the current thread.
> >
> > What prevents such a scenario?
> >
> > _If_ every mcast group manipulation happens under the rtnl lock, then
> > you could just as well remove the confusing mcast_grp_lock entirely.
> >
> > Cheers,
> >
> > Paolo
> [Suman] I added this lock because these requests can also come from a
> user-space application. In that case, the application will send a
> mailbox to the kernel to add/delete multicast nodes. But I got your
> point, and there is indeed a chance of a race. Let me think it through
> and push a fix. So, what process should be followed here? Are you
> going to revert the change, or can I push a separate fix on the net
> tree?
You can push a follow-up fix.

We would end up reverting the patch only if the fix takes too long to
land and the issue starts hitting people.
Cheers,
Paolo
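
For illustration, a minimal sketch of one shape such a follow-up fix
could take, assuming the locking is simply moved out to the callers so
the element cannot be freed between lookup and use. This is a sketch
only, not the actual patch that later landed, and the caller snippet
(including use()) is hypothetical:

	static struct nix_mcast_grp_elem *
	rvu_nix_mcast_find_grp_elem(struct nix_mcast_grp *mcast_grp,
				    u32 mcast_grp_idx)
	{
		struct nix_mcast_grp_elem *iter;

		/* Caller must hold mcast_grp->mcast_grp_lock, so a
		 * concurrent rvu_mbox_handler_nix_mcast_grp_destroy()
		 * cannot free the element while it is still in use.
		 */
		list_for_each_entry(iter, &mcast_grp->mcast_grp_head, list) {
			if (iter->mcast_grp_idx == mcast_grp_idx)
				return iter;
		}

		return NULL;
	}

	/* Hypothetical caller, holding the lock across lookup and use */
	mutex_lock(&mcast_grp->mcast_grp_lock);
	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
	if (elem)
		use(elem);	/* safe: destroy serializes on the same lock */
	mutex_unlock(&mcast_grp->mcast_grp_lock);

The point of this shape is that the find-and-use sequence becomes one
critical section, closing the window Paolo describes above; whether the
real fix takes this form or, per his suggestion, relies on the rtnl
lock and drops mcast_grp_lock altogether is up to the follow-up patch.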