Message-Id: <DEONI4HOMXZZ.ATPIJ2O56QW0@bootlin.com>
Date: Wed, 03 Dec 2025 15:28:32 +0100
From: Théo Lebrun <theo.lebrun@...tlin.com>
To: "Paolo Valerio" <pvalerio@...hat.com>, Théo Lebrun
<theo.lebrun@...tlin.com>, <netdev@...r.kernel.org>
Cc: "Nicolas Ferre" <nicolas.ferre@...rochip.com>, "Claudiu Beznea"
<claudiu.beznea@...on.dev>, "Andrew Lunn" <andrew+netdev@...n.ch>, "David
S. Miller" <davem@...emloft.net>, "Eric Dumazet" <edumazet@...gle.com>,
"Jakub Kicinski" <kuba@...nel.org>, "Paolo Abeni" <pabeni@...hat.com>,
"Lorenzo Bianconi" <lorenzo@...nel.org>
Subject: Re: [PATCH RFC net-next 0/6] net: macb: Add XDP support and page
pool integration
On Tue Dec 2, 2025 at 6:24 PM CET, Paolo Valerio wrote:
> On 26 Nov 2025 at 07:08:14 PM, Théo Lebrun <theo.lebrun@...tlin.com> wrote:
>> ### Rx buffer size computation
[...]
>> - NET_IP_ALIGN is accounted for in the headroom even though it isn't
>> present if !RSC.
>
> That's something I noticed and I was unsure about the reason.
A mistake because I forgot, nothing more than that.
>> - If the size clamping to PAGE_SIZE comes into play, we are probably
>> doomed. It means we cannot deal with the MTU and we'll probably get
>> corruption. If we do put a check in place, it should loudly fail
>> rather than silently clamp.
>
> That should not happen, unless I'm missing something.
> E.g., a 9000B MTU on a 4K PAGE_SIZE kernel should be handled with multiple
> descriptors. The clamping is there because, according to how the series
> creates the pool, the maximum buffer size is an order-0 page.
>
> Hardware-wise, bp->rx_buffer_size should also be taken into account for
> the receive buffer size.
Yes, I agree. We can drop the check; I was not implying we *had* to
keep it.
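To make the budget we are discussing concrete, here is how I picture the
per-descriptor payload room on an order-0 page pool. This is only a
sketch: macb_rx_payload_per_desc() is a made-up helper, and I am assuming
the headroom lives in the bp->rx_offset field from this series.

static unsigned int macb_rx_payload_per_desc(struct macb *bp)
{
        /* Payload room left in one page once the headroom and the
         * tailroom for skb_shared_info are reserved.
         */
        return PAGE_SIZE - bp->rx_offset -
               SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
}

Anything above that value has to be spread over several descriptors, as
you describe; clamping the buffer size instead would be the silent
corruption case I was worried about.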
[...]
>> ### Buffer variable names
>>
>> Related: so many variables, fields or constants have ambiguous names,
>> can we do something about it?
>>
>> - bp->rx_offset is named oddly to my ears. Offset to what?
>> Maybe bp->rx_head or bp->rx_headroom?
>
> bp->rx_headroom sounds like a good choice to me, but if you have a stronger
> preference for bp->rx_head, just let me know.
No strong preference, ack for bp->rx_headroom.
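For the record, here is what I would expect the renamed field to cover.
This is an assumption about the eventual headroom policy, not what the
series does today, and the helper name is made up.

static void macb_init_rx_headroom(struct macb *bp)
{
        /* Headroom reserved in front of the frame data in each Rx
         * buffer: room for an XDP program to grow headers, plus the
         * usual IP header alignment.
         */
        bp->rx_headroom = XDP_PACKET_HEADROOM + NET_IP_ALIGN;
}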
[...]
>> ### XDP_SETUP_PROG if netif_running()
>>
>> I'd like to start a discussion on the expected behavior of an XDP program
>> change while netif_running(). Summarised:
>>
>> static int gem_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
>>                          struct netlink_ext_ack *extack)
>> {
>>         struct macb *bp = netdev_priv(dev);
>>         struct bpf_prog *old_prog;
>>         bool running = netif_running(dev);
>>         bool need_update = !!bp->prog != !!prog;
>>
>>         if (running && need_update)
>>                 macb_close(dev);
>>         old_prog = rcu_replace_pointer(bp->prog, prog, lockdep_rtnl_is_held());
>>         if (old_prog)
>>                 bpf_prog_put(old_prog);
>>         if (running && need_update)
>>                 return macb_open(dev);
>>
>>         return 0;
>> }
>>
>> Have you experimented with that? I don't see anything graceful in our
>> close operation; it looks like we'll get corruption or dropped packets,
>> or both. We shouldn't impose that on the user who just wanted to swap
>> the program.
>>
>> I cannot find any good reason why we wouldn't be able to swap our XDP
>> program on the fly. If we think it is unsafe, I'd vote for starting with
>> a -EBUSY return code and iterating from there.
>>
>
> I didn't experiment much with this, other than simply adding and
> removing programs as needed during my tests. I didn't run into any
> particular issues.
>
> The reason a close/open sequence was added here was mostly that I was
> considering accounting for XDP_PACKET_HEADROOM only when a program was
> present. I later decided not to proceed with that (mostly to avoid
> changing too many things at once).
>
> Given that the geometry of the buffer remains untouched in either case, I
> see no particular reason we can't swap on the fly as you suggest.
>
> I'll try this and change it, thanks!
Yes! Reading the code, I had guessed that you had thought about changing
the headroom based on XDP vs !XDP. :-)
I agree we should aim for on-the-fly swapping in all cases; it sounds
reasonable to achieve and is a nice-to-have feature.
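For the on-the-fly path, something along these lines is what I have in
mind. It is only a sketch, under the assumption that the buffer geometry
really is identical with and without a program and that the NAPI poll
path reads bp->prog under RCU:

static int gem_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
                         struct netlink_ext_ack *extack)
{
        struct macb *bp = netdev_priv(dev);
        struct bpf_prog *old_prog;

        /* No close/open cycle: the Rx buffers keep the same layout, so
         * swapping the program pointer is enough.
         */
        old_prog = rcu_replace_pointer(bp->prog, prog, lockdep_rtnl_is_held());
        if (old_prog)
                bpf_prog_put(old_prog);

        return 0;
}

If this turns out to be unsafe for some reason we have not spotted,
returning -EBUSY while the interface is up would still be less disruptive
than the close/open cycle.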
Regards,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com