Message-ID: <MW4PR11MB57761738D8B4857AC627C11CFD4DA@MW4PR11MB5776.namprd11.prod.outlook.com>
Date: Mon, 5 Jun 2023 13:53:05 +0000
From: "Drewek, Wojciech" <wojciech.drewek@...el.com>
To: Simon Horman <simon.horman@...igine.com>
CC: "intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Lobakin, Aleksander"
<aleksander.lobakin@...el.com>, "Ertman, David M" <david.m.ertman@...el.com>,
"michal.swiatkowski@...ux.intel.com" <michal.swiatkowski@...ux.intel.com>,
"marcin.szycik@...ux.intel.com" <marcin.szycik@...ux.intel.com>,
"Chmielewski, Pawel" <pawel.chmielewski@...el.com>, "Samudrala, Sridhar"
<sridhar.samudrala@...el.com>, "pmenzel@...gen.mpg.de"
<pmenzel@...gen.mpg.de>, "dan.carpenter@...aro.org"
<dan.carpenter@...aro.org>, Jakub Kicinski <kuba@...nel.org>, Eric Dumazet
<edumazet@...gle.com>
Subject: RE: [PATCH iwl-next v4 07/13] ice: Switchdev FDB events support
> -----Original Message-----
> From: Simon Horman <simon.horman@...igine.com>
> Sent: Sunday, June 4, 2023 16:18
> To: Drewek, Wojciech <wojciech.drewek@...el.com>
> Cc: intel-wired-lan@...ts.osuosl.org; netdev@...r.kernel.org; Lobakin, Aleksander <aleksander.lobakin@...el.com>; Ertman, David M
> <david.m.ertman@...el.com>; michal.swiatkowski@...ux.intel.com; marcin.szycik@...ux.intel.com; Chmielewski, Pawel
> <pawel.chmielewski@...el.com>; Samudrala, Sridhar <sridhar.samudrala@...el.com>; pmenzel@...gen.mpg.de;
> dan.carpenter@...aro.org; Jakub Kicinski <kuba@...nel.org>; Eric Dumazet <edumazet@...gle.com>
> Subject: Re: [PATCH iwl-next v4 07/13] ice: Switchdev FDB events support
>
> + Jakub, Eric
>
> On Wed, May 24, 2023 at 02:21:15PM +0200, Wojciech Drewek wrote:
> > Listen for SWITCHDEV_FDB_{ADD|DEL}_TO_DEVICE events while in switchdev
> > mode. Accept these events on both uplink and VF PR ports. Add HW
> > rules in newly created workqueue. FDB entries are stored in rhashtable
> > for lookup when removing the entry and in the list for cleanup
> > purpose. Direction of the HW rule depends on the type of the ports
> > on which the FDB event was received:
> >
> > ICE_ESWITCH_BR_UPLINK_PORT:
> > TX rule that forwards the packet to the LAN (egress).
> >
> > ICE_ESWITCH_BR_VF_REPR_PORT:
> > RX rule that forwards the packet to the VF associated
> > with the port representor.
> >
> > In both cases the rule matches on the dst mac address.
> > All the FDB entries are stored in the bridge structure.
> > When the port is removed all the FDB entries associated with
> > this port are removed as well. This is achieved thanks to the reference
> > to the port that FDB entry holds.
> >
> > In the fwd rule we use only one lookup type (MAC address)
> > but lkups_cnt variable is already introduced because
> > we will have more lookups in the subsequent patches.
> >
> > Signed-off-by: Wojciech Drewek <wojciech.drewek@...el.com>
>
> ...
>
> > +static void
> > +ice_eswitch_br_fdb_event_work(struct work_struct *work)
> > +{
> > + struct ice_esw_br_fdb_work *fdb_work = ice_work_to_fdb_work(work);
> > + bool added_by_user = fdb_work->fdb_info.added_by_user;
> > + struct ice_esw_br_port *br_port = fdb_work->br_port;
> > + const unsigned char *mac = fdb_work->fdb_info.addr;
> > + u16 vid = fdb_work->fdb_info.vid;
> > +
> > + rtnl_lock();
> > +
> > + if (!br_port || !br_port->bridge)
> > + goto err_exit;
> > +
> > + switch (fdb_work->event) {
> > + case SWITCHDEV_FDB_ADD_TO_DEVICE:
> > + ice_eswitch_br_fdb_entry_create(fdb_work->dev, br_port,
> > + added_by_user, mac, vid);
> > + break;
> > + case SWITCHDEV_FDB_DEL_TO_DEVICE:
> > + ice_eswitch_br_fdb_entry_find_and_delete(br_port->bridge,
> > + mac, vid);
> > + break;
> > + default:
> > + goto err_exit;
> > + }
> > +
> > +err_exit:
> > + rtnl_unlock();
> > + dev_put(fdb_work->dev);
>
> Hi Wojciech,
>
> I notice that the CI flags this as use of a deprecated API.
> So I'm wondering if it would be better written using netdev_put()
> And likewise, I'm wondering if other users in the ice driver should be
> updated.
Sure, I can use it, but I don't see any examples of device drivers using it (the API is quite new, which is probably why).
Also, that would mean I should declare my own tracker, right?
Is it safe to use such a tracker for devices not owned by our driver (the LAG case)? I feel like the tracker
should be correlated with only one device, or can it be used for many devices?
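For reference, here is roughly what the conversion would look like. This is only a sketch: it assumes we embed a netdevice_tracker in struct ice_esw_br_fdb_work (the "dev_tracker" field name is made up here), and my understanding is that each tracker covers a single hold/put pair, so each work item would carry its own rather than sharing one per device:

```c
/* Sketch, not a tested patch. Assumes struct ice_esw_br_fdb_work gains a
 * netdevice_tracker member; "dev_tracker" is a hypothetical field name. */
struct ice_esw_br_fdb_work {
	struct work_struct work;
	struct switchdev_notifier_fdb_info fdb_info;
	struct ice_esw_br_port *br_port;
	struct net_device *dev;
	netdevice_tracker dev_tracker;	/* tracks this work item's reference */
	unsigned long event;
};

/* In ice_eswitch_br_switchdev_event(), instead of dev_hold(dev): */
netdev_hold(dev, &work->dev_tracker, GFP_ATOMIC);

/* In ice_eswitch_br_fdb_event_work(), instead of dev_put(fdb_work->dev): */
netdev_put(fdb_work->dev, &fdb_work->dev_tracker);
```

If that reading of the tracker API is right, the LAG case should be fine as well, since the tracker is tied to the reference we take, not to who registered the device.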
>
> > + ice_eswitch_br_fdb_work_dealloc(fdb_work);
> > +}
>
> ...
>
> > +static int
> > +ice_eswitch_br_switchdev_event(struct notifier_block *nb,
> > + unsigned long event, void *ptr)
> > +{
> > + struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
> > + struct switchdev_notifier_fdb_info *fdb_info;
> > + struct switchdev_notifier_info *info = ptr;
> > + struct ice_esw_br_offloads *br_offloads;
> > + struct ice_esw_br_fdb_work *work;
> > + struct ice_esw_br_port *br_port;
> > + struct netlink_ext_ack *extack;
> > + struct net_device *upper;
> > +
> > + br_offloads = ice_nb_to_br_offloads(nb, switchdev_nb);
> > + extack = switchdev_notifier_info_to_extack(ptr);
> > +
> > + upper = netdev_master_upper_dev_get_rcu(dev);
> > + if (!upper)
> > + return NOTIFY_DONE;
> > +
> > + if (!netif_is_bridge_master(upper))
> > + return NOTIFY_DONE;
> > +
> > + if (!ice_eswitch_br_is_dev_valid(dev))
> > + return NOTIFY_DONE;
> > +
> > + br_port = ice_eswitch_br_netdev_to_port(dev);
> > + if (!br_port)
> > + return NOTIFY_DONE;
> > +
> > + switch (event) {
> > + case SWITCHDEV_FDB_ADD_TO_DEVICE:
> > + case SWITCHDEV_FDB_DEL_TO_DEVICE:
> > + fdb_info = container_of(info, typeof(*fdb_info), info);
> > +
> > + work = ice_eswitch_br_fdb_work_alloc(fdb_info, br_port, dev,
> > + event);
> > + if (IS_ERR(work)) {
> > + NL_SET_ERR_MSG_MOD(extack, "Failed to init switchdev fdb work");
> > + return notifier_from_errno(PTR_ERR(work));
> > + }
> > + dev_hold(dev);
>
> Likewise, I'm wondering if this should be netdev_hold().
>
> > +
> > + queue_work(br_offloads->wq, &work->work);
> > + break;
> > + default:
> > + break;
> > + }
> > + return NOTIFY_DONE;
> > +}
> > +
>
> ...