Message-ID:
<SJ0PR18MB5216F4E57D57AD4DF5013D15DB84A@SJ0PR18MB5216.namprd18.prod.outlook.com>
Date: Wed, 6 Dec 2023 16:33:44 +0000
From: Suman Ghosh <sumang@...vell.com>
To: Paolo Abeni <pabeni@...hat.com>,
	Sunil Kovvuri Goutham <sgoutham@...vell.com>,
	Geethasowjanya Akula <gakula@...vell.com>,
	Subbaraya Sundeep Bhatta <sbhatta@...vell.com>,
	Hariprasad Kelam <hkelam@...vell.com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"edumazet@...gle.com" <edumazet@...gle.com>,
	"kuba@...nel.org" <kuba@...nel.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Linu Cherian <lcherian@...vell.com>,
	Jerin Jacob Kollanukkaran <jerinj@...vell.com>,
	"horms@...nel.org" <horms@...nel.org>,
	"wojciech.drewek@...el.com" <wojciech.drewek@...el.com>
Subject: RE: [EXT] Re: [net-next PATCH v6 1/2] octeontx2-af: Add new mbox to
support multicast/mirror offload
>On Mon, 2023-12-04 at 19:49 +0530, Suman Ghosh wrote:
>> A new mailbox is added to support offloading of multicast/mirror
>> functionality. The mailbox also supports dynamic updation of the
>> multicast/mirror list.
>>
>> Signed-off-by: Suman Ghosh <sumang@...vell.com>
>> Reviewed-by: Wojciech Drewek <wojciech.drewek@...el.com>
>> Reviewed-by: Simon Horman <horms@...nel.org>
>
>Note that v5 was already applied to net-next. But I still have a
>relevant note, see below.
>
>> @@ -5797,3 +6127,337 @@ int rvu_mbox_handler_nix_bandprof_get_hwinfo(struct rvu *rvu, struct msg_req *re
>>
>> return 0;
>> }
>> +
>> +static struct nix_mcast_grp_elem *rvu_nix_mcast_find_grp_elem(struct nix_mcast_grp *mcast_grp,
>> +							       u32 mcast_grp_idx)
>> +{
>> + struct nix_mcast_grp_elem *iter;
>> + bool is_found = false;
>> +
>> + mutex_lock(&mcast_grp->mcast_grp_lock);
>> + list_for_each_entry(iter, &mcast_grp->mcast_grp_head, list) {
>> + if (iter->mcast_grp_idx == mcast_grp_idx) {
>> + is_found = true;
>> + break;
>> + }
>> + }
>> + mutex_unlock(&mcast_grp->mcast_grp_lock);
>
>AFAICS, at this point another thread/CPU could kick-in and run
>rvu_mbox_handler_nix_mcast_grp_destroy() up to completion, freeing
>'iter' before it's later used by the current thread.
>
>What prevents such scenario?
>
>_If_ every mcast group manipulation happens under the rtnl lock, then
>you could as well completely remove the confusing mcast_grp_lock.
>
>Cheers,
>
>Paolo
[Suman] I added this lock because these requests can also come from a user-space application. In that case, the application sends a mailbox to the kernel to add/del multicast nodes. But I got your point, and there is indeed a chance of a race. Let me think it through and push a fix. What process should be followed here? Are you going to revert the change, or can I push a separate fix on the net tree?
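
For discussion, one possible direction could look like the sketch below (untested, only an illustration of the locking scheme; the helper name matches the patch, but the caller fragment and the 'elem' variable are hypothetical): make the lookup helper assume mcast_grp_lock is already held, and keep the caller's critical section open across both the lookup and every later use of the returned element.

/* Caller must hold mcast_grp->mcast_grp_lock. */
static struct nix_mcast_grp_elem *rvu_nix_mcast_find_grp_elem(struct nix_mcast_grp *mcast_grp,
							      u32 mcast_grp_idx)
{
	struct nix_mcast_grp_elem *iter;

	list_for_each_entry(iter, &mcast_grp->mcast_grp_head, list) {
		if (iter->mcast_grp_idx == mcast_grp_idx)
			return iter;
	}

	return NULL;
}

/* Hypothetical caller fragment (e.g. inside an mbox handler): lookup and
 * use stay inside one critical section, so a concurrent destroy cannot
 * free the element in between.
 */
	mutex_lock(&mcast_grp->mcast_grp_lock);
	elem = rvu_nix_mcast_find_grp_elem(mcast_grp, mcast_grp_idx);
	if (elem) {
		/* ... update/use elem ... */
	}
	mutex_unlock(&mcast_grp->mcast_grp_lock);

With that shape, the destroy path would take the same mutex before unlinking and freeing, so it could no longer race with users of the looked-up element. This is just a sketch of one option, not the actual fix.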