Message-Id: <DFNEDVIHWVSS.42X1VB6HKJBF@bootlin.com>
Date: Tue, 13 Jan 2026 11:43:08 +0100
From: Théo Lebrun <theo.lebrun@...tlin.com>
To: "Paolo Valerio" <pvalerio@...hat.com>, Théo Lebrun
<theo.lebrun@...tlin.com>, <netdev@...r.kernel.org>
Cc: "Nicolas Ferre" <nicolas.ferre@...rochip.com>, "Claudiu Beznea"
<claudiu.beznea@...on.dev>, "Andrew Lunn" <andrew+netdev@...n.ch>, "David
S. Miller" <davem@...emloft.net>, "Eric Dumazet" <edumazet@...gle.com>,
"Jakub Kicinski" <kuba@...nel.org>, "Paolo Abeni" <pabeni@...hat.com>,
"Lorenzo Bianconi" <lorenzo@...nel.org>, "Thomas Petazzoni"
<thomas.petazzoni@...tlin.com>, "Gregory Clement"
<gregory.clement@...tlin.com>
Subject: Re: [PATCH RFC net-next v2 3/8] cadence: macb: Add page pool
support handle multi-descriptor frame rx
On Mon Jan 12, 2026 at 3:16 PM CET, Paolo Valerio wrote:
> On 08 Jan 2026 at 04:43:43 PM, Théo Lebrun <theo.lebrun@...tlin.com> wrote:
>> On Sun Dec 21, 2025 at 12:51 AM CET, Paolo Valerio wrote:
>>> @@ -1382,58 +1382,118 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
>>> + first_frame = ctrl & MACB_BIT(RX_SOF);
>>> len = ctrl & bp->rx_frm_len_mask;
>>>
>>> - netdev_vdbg(bp->dev, "gem_rx %u (len %u)\n", entry, len);
>>> + if (len) {
>>> + data_len = len;
>>> + if (!first_frame)
>>> + data_len -= queue->skb->len;
>>> + } else {
>>> + data_len = bp->rx_buffer_size;
>>> + }
>>
>> Why deal with the `!len` case? How can it occur? The user guide doesn't
>> hint at it. It would mean we would grab uninitialised bytes, as we assume
>> len is the max buffer size.
>
> Good point. After taking a second look, !len may not be the most reliable
> way to check this.
> From the datasheet, status signals are only valid (with some exceptions)
> when MACB_BIT(RX_EOF) is set. As a side effect, len is always zero on my
> hw for frames without the EOF bit, but it's probably better to just rely
> on MACB_BIT(RX_EOF) instead of reading something that may end up being
> unreliable.
100%, I do agree!
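For the archives, here is a rough sketch of what keying on MACB_BIT(RX_EOF)
instead of !len could look like (reusing the names from the hunk above;
`last_frame` is a local I made up, and this is untested code meant only to
illustrate the idea, not the actual patch):

	first_frame = ctrl & MACB_BIT(RX_SOF);
	last_frame = ctrl & MACB_BIT(RX_EOF);	/* hypothetical local */

	if (last_frame) {
		/* The frame length field is only valid once RX_EOF is set. */
		len = ctrl & bp->rx_frm_len_mask;
		data_len = len;
		if (!first_frame)
			data_len -= queue->skb->len;
	} else {
		/* Intermediate descriptor: the whole buffer is used. */
		data_len = bp->rx_buffer_size;
	}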
>>> + bp->rx_buffer_size = SKB_DATA_ALIGN(size);
>>> + if (gem_total_rx_buffer_size(bp) > PAGE_SIZE) {
>>> + overhead = bp->rx_headroom +
>>> + SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>>> + bp->rx_buffer_size = rounddown(PAGE_SIZE - overhead,
>>> + RX_BUFFER_MULTIPLE);
>>> + }
>>
>> I've seen your comment in [0/8]. Do you have any advice on how to test
>> this clamping? All I can think of is to either configure a massive MTU
>> or, more easily, cheat with the headroom.
>
> I normally test the set with 4k PAGE_SIZE and, as you said, set the MTU to
> something bigger than that. This is still possible with 8k pages
> (given .jumbo_max_len = 10240).
Ah yes, there is .jumbo_max_len, but our PAGE_SIZE==16K > .jumbo_max_len,
so we cannot land in that codepath.
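For my own sanity, a rough worked example of the clamp on a 4k-page system
with the MTU bumped towards .jumbo_max_len = 10240 (the headroom and
shared_info numbers below are my assumptions for a typical 64-bit config,
not values taken from the patch):

	/* Assumed values: */
	overhead = 256		/* bp->rx_headroom */
		 + 320;		/* SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */
	/* With PAGE_SIZE = 4096 and RX_BUFFER_MULTIPLE = 64: */
	bp->rx_buffer_size = rounddown(4096 - overhead, RX_BUFFER_MULTIPLE);
	/* -> rounddown(3520, 64) = 3520 */

So a 10240-byte frame would span three descriptors on a 4k-page system,
which is exactly the multi-descriptor path this series exercises.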
>> Also, should we warn? It means MTU-sized packets will be received in
>> fragments. It will work, but it is probably unexpected by users and a
>> source of slowdown that they might want to know about.
>
> I'm not sure about the warning as I don't see this as a user-level detail.
> For debugging purposes, I guess we should be fine with the last printout
> (even better once extended with your suggestion). Of course, feel free to
> disagree.
I'm fine with no warnings. We'll check our performance anyway. :-)
If it changes, we'll notice.
Regards,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com