Open Source and information security mailing list archives
Message-ID: <20250425090153.170f11bd@device-40.home>
Date: Fri, 25 Apr 2025 09:01:53 +0200
From: Maxime Chevallier <maxime.chevallier@...tlin.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, Andrew Lunn <andrew@...n.ch>, Eric Dumazet
 <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Heiner Kallweit
 <hkallweit1@...il.com>, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org, thomas.petazzoni@...tlin.com,
 linux-arm-kernel@...ts.infradead.org, Christophe Leroy
 <christophe.leroy@...roup.eu>, Herve Codina <herve.codina@...tlin.com>,
 Florian Fainelli <f.fainelli@...il.com>, Russell King
 <linux@...linux.org.uk>, Vladimir Oltean <vladimir.oltean@....com>,
 Köry Maincent <kory.maincent@...tlin.com>, Oleksij Rempel
 <o.rempel@...gutronix.de>, Simon Horman <horms@...nel.org>, Romain Gantois
 <romain.gantois@...tlin.com>, Piergiorgio Beruto
 <piergiorgio.beruto@...il.com>
Subject: Re: [PATCH net-next v7 1/3] net: ethtool: Introduce per-PHY DUMP
 operations

Hi Jakub,

On Thu, 24 Apr 2025 18:03:33 -0700
Jakub Kicinski <kuba@...nel.org> wrote:

> On Tue, 22 Apr 2025 18:17:14 +0200 Maxime Chevallier wrote:
> > +/* perphy ->start() handler for GET requests */  
> 
> Just because I think there are real bugs, I will allow myself an
> uber-nit of asking to spell the perphy as per-PHY or such in the
> comment? :)

No problem :) Thanks a lot for the review

> > +static int ethnl_perphy_start(struct netlink_callback *cb)
> > +{
> > +	struct ethnl_perphy_dump_ctx *phy_ctx = ethnl_perphy_dump_context(cb);
> > +	const struct genl_dumpit_info *info = genl_dumpit_info(cb);
> > +	struct ethnl_dump_ctx *ctx = &phy_ctx->ethnl_ctx;
> > +	struct ethnl_reply_data *reply_data;
> > +	const struct ethnl_request_ops *ops;
> > +	struct ethnl_req_info *req_info;
> > +	struct genlmsghdr *ghdr;
> > +	int ret;
> > +
> > +	BUILD_BUG_ON(sizeof(*ctx) > sizeof(cb->ctx));
> > +
> > +	ghdr = nlmsg_data(cb->nlh);
> > +	ops = ethnl_default_requests[ghdr->cmd];
> > +	if (WARN_ONCE(!ops, "cmd %u has no ethnl_request_ops\n", ghdr->cmd))
> > +		return -EOPNOTSUPP;
> > +	req_info = kzalloc(ops->req_info_size, GFP_KERNEL);
> > +	if (!req_info)
> > +		return -ENOMEM;
> > +	reply_data = kmalloc(ops->reply_data_size, GFP_KERNEL);
> > +	if (!reply_data) {
> > +		ret = -ENOMEM;
> > +		goto free_req_info;
> > +	}
> > +
> > +	/* Don't ignore the dev even for DUMP requests */  
> 
> another nit, this comment wasn't super clear without looking at the dump
> for non-per-phy case. Maybe:
> 
> 	/* Unlike per-dev dump, don't ignore dev. The dump handler
> 	 * will notice it and dump PHYs from given dev.
> 	 */
> ?

That's better indeed :)

> > +	ret = ethnl_default_parse(req_info, &info->info, ops, false);
> > +	if (ret < 0)
> > +		goto free_reply_data;
> > +
> > +	ctx->ops = ops;
> > +	ctx->req_info = req_info;
> > +	ctx->reply_data = reply_data;
> > +	ctx->pos_ifindex = 0;
> > +
> > +	return 0;
> > +
> > +free_reply_data:
> > +	kfree(reply_data);
> > +free_req_info:
> > +	kfree(req_info);
> > +
> > +	return ret;
> > +}
> > +
> > +static int ethnl_perphy_dump_one_dev(struct sk_buff *skb,
> > +				     struct net_device *dev,
> > +				     struct ethnl_perphy_dump_ctx *ctx,
> > +				     const struct genl_info *info)
> > +{
> > +	struct ethnl_dump_ctx *ethnl_ctx = &ctx->ethnl_ctx;
> > +	struct phy_device_node *pdn;
> > +	int ret = 0;
> > +
> > +	if (!dev->link_topo)
> > +		return 0;  
> 
> Now for the bugs..
> 
> > +	xa_for_each_start(&dev->link_topo->phys, ctx->pos_phyindex, pdn,
> > +			  ctx->pos_phyindex) {
> > +		ethnl_ctx->req_info->phy_index = ctx->pos_phyindex;
> > +
> > +		/* We can re-use the original dump_one as ->prepare_data in
> > +		 * commands use ethnl_req_get_phydev(), which gets the PHY from
> > +		 * the req_info->phy_index
> > +		 */
> > +		ret = ethnl_default_dump_one(skb, dev, ethnl_ctx, info);
> > +		if (ret)
> > +			break;  
> 
> 		return ret;
> 
> > +	}  
> 
> 	ctx->pos_phyindex = 0;
> 
> 	return 0;
> 
> IOW I don't see you resetting the pos_phyindex, so I think you'd only
> dump correctly the first device? The next device will try to dump its
> PHYs starting from the last index of the previous dev's PHY? [1]

That is true... My mistake was to test on a system where the first
interface has only one PHY and the second has many. I'll adjust my
tests and fix that, thanks a lot for spotting it!

> 
> > +	return ret;
> > +}
> > +
> > +static int ethnl_perphy_dump_all_dev(struct sk_buff *skb,
> > +				     struct ethnl_perphy_dump_ctx *ctx,
> > +				     const struct genl_info *info)
> > +{
> > +	struct ethnl_dump_ctx *ethnl_ctx = &ctx->ethnl_ctx;
> > +	struct net *net = sock_net(skb->sk);
> > +	netdevice_tracker dev_tracker;
> > +	struct net_device *dev;
> > +	int ret = 0;
> > +
> > +	rcu_read_lock();
> > +	for_each_netdev_dump(net, dev, ethnl_ctx->pos_ifindex) {
> > +		netdev_hold(dev, &dev_tracker, GFP_ATOMIC);
> > +		rcu_read_unlock();
> > +
> > +		/* per-PHY commands use ethnl_req_get_phydev(), which needs the
> > +		 * net_device in the req_info
> > +		 */
> > +		ethnl_ctx->req_info->dev = dev;
> > +		ret = ethnl_perphy_dump_one_dev(skb, dev, ctx, info);
> > +
> > +		rcu_read_lock();
> > +		netdev_put(dev, &dev_tracker);  
> 
> missing
> 
> 		ethnl_ctx->req_info->dev = NULL;
> 
> right? Otherwise if we need to send multiple skbs the "continuation"
> one will think we're doing a filtered dump.
> 
> Looking at commits 7c93a88785dae6 and c0111878d45e may be helpful,
> but I doubt you can test it on a real system, filling even 4kB
> may be hard for small messages :(

Ah damn, yes. I got that right I think in net/ethtool/phy.c but not
here.

As for testing, I do have a local patch adding PHY support to
netdevsim, which lets me attach an arbitrary number of PHYs to any
nsim device. I'll make sure to test this case. I still plan to
upstream the netdevsim part at some point, but it needs a bit more
polishing...

> > +		if (ret < 0 && ret != -EOPNOTSUPP) {
> > +			if (likely(skb->len))
> > +				ret = skb->len;
> > +			break;
> > +		}
> > +		ret = 0;  
> 
> [1] or you can clear the pos_index here
> 
> > +	}
> > +	rcu_read_unlock();
> > +
> > +	return ret;
> > +}
> > +
> > +/* perphy ->dumpit() handler for GET requests. */
> > +static int ethnl_perphy_dumpit(struct sk_buff *skb,
> > +			       struct netlink_callback *cb)
> > +{
> > +	struct ethnl_perphy_dump_ctx *ctx = ethnl_perphy_dump_context(cb);
> > +	struct ethnl_dump_ctx *ethnl_ctx = &ctx->ethnl_ctx;
> > +	int ret = 0;
> > +
> > +	if (ethnl_ctx->req_info->dev) {
> > +		ret = ethnl_perphy_dump_one_dev(skb, ethnl_ctx->req_info->dev,
> > +						ctx, genl_info_dump(cb));
> > +		if (ret < 0 && ret != -EOPNOTSUPP && likely(skb->len))
> > +			ret = skb->len;
> > +
> > +		netdev_put(ethnl_ctx->req_info->dev,
> > +			   &ethnl_ctx->req_info->dev_tracker);  
> 
> You have to release this in .done
> dumpit gets called multiple times until we run out of objects to dump.
> OTOH user may close the socket without finishing the dump operation.
> So all .dumpit implementations must be "balanced". The only state we
> should touch in them is the dump context to know where to pick up from
> next time.

Thanks for pointing it out.

Now that you mention it, I guess I should also move the netdevice
tracker I'm using for the netdev_hold() around the
ethnl_perphy_dump_one_dev() call into struct ethnl_perphy_dump_ctx?
That way we make sure the netdev doesn't go away between the multiple
.dumpit() calls.

Is that correct?

> > +	} else {
> > +		ret = ethnl_perphy_dump_all_dev(skb, ctx, genl_info_dump(cb));
> > +	}
> > +
> > +	return ret;
> > +}  

Thanks a lot for the review, that's most helpful.

Maxime
