Message-ID: <8735wlmuxh.fsf@waldekranz.com>
Date: Tue, 23 Mar 2021 22:17:30 +0100
From: Tobias Waldekranz <tobias@...dekranz.com>
To: Vladimir Oltean <olteanv@...il.com>
Cc: davem@...emloft.net, kuba@...nel.org, andrew@...n.ch,
vivien.didelot@...il.com, f.fainelli@...il.com,
netdev@...r.kernel.org
Subject: Re: [PATCH net-next] net: dsa: mv88e6xxx: Allow dynamic reconfiguration of tag protocol

On Tue, Mar 23, 2021 at 21:03, Vladimir Oltean <olteanv@...il.com> wrote:
> On Tue, Mar 23, 2021 at 03:48:51PM +0100, Tobias Waldekranz wrote:
>> On Tue, Mar 23, 2021 at 13:35, Vladimir Oltean <olteanv@...il.com> wrote:
>> > The netdev_uses_dsa thing is a bit trashy; I think that a more polished
>> > version should rather set NETIF_F_RXALL for the DSA master, and have the
>> > dpaa driver act upon that. But first I'm curious if it works.
>>
>> It does work. Thank you!
>
> Happy to hear that.
>
>> Does setting RXALL mean that the master would accept frames with a bad
>> FCS as well?
>
> Do you mean from the perspective of the network stack, or of the hardware?
>
> As far as the hardware is concerned, here is what the manual has to say:
>
> Frame reception from the network may encounter certain error conditions.
> Such errors are reported by the Ethernet MAC when the frame is transferred
> to the Buffer Manager Interface (BMI). The action taken per error case
> is described below. Besides the interrupts, the BMI is capable of
> recognizing several conditions and setting a corresponding flag in the
> FD status field for Host usage. These conditions are as follows:
>
> * Physical Error. One of the following events was detected by the
> Ethernet MAC: Rx FIFO overflow, FCS error, code error, running
> disparity error (in applicable modes), FIFO parity error, PHY Sequence
> error, PHY error control character detected, CRC error. The BMI
> discards the frame, or enqueues it directly to EFQID if
> FMBM_RCFG[FDOVR] is set [ editor's note: this is what my patch does ].
> The FPE bit is set in the FD status.
> * Frame size error. The Ethernet MAC detected a frame whose length
> exceeds the maximum allowed as configured in the MAC registers. The
> frame is truncated by the MAC to the maximum allowed, and it is marked
> as truncated. The BMI sets FSE in the FD status and forwards the frame
> to the next module in the FMan as usual.
> * Some other network error may result in the frame being discarded by
> the MAC and not shown to the BMI. However, the MAC is responsible for
> counting such errors in its own statistics counters.
>
> So yes, packets with bad FCS are accepted with FMBM_RCFG[FDOVR] set.
> But it would be interesting to see what the value of "fd_status" is in
> rx_default_dqrr() for bad packets. You know, in the DPAA world, the
> correct approach to solve this problem would be to create a
> configuration to describe a "soft examination sequence" for the
> programmable hardware "soft parser", which identifies the DSA tag and

Yeah I know you can do that. It is a very flexible chip that can do all
kinds of fancy stuff...

> skips over a programmable number of octets. This allows you to
> continue to do things such as flow steering based on IP headers
> located after the DSA tag, etc. This is not supported in the upstream
> FMan driver, however: neither the soft parser itself nor an abstraction
> for making DSA masters DSA-aware. I think it would also require more

...but this is the problem. These accelerators are always guarded by
NDAs and proprietary code. If NXP could transpile XDP to dpaa/dpaa2 in
the kernel the way Netronome does, we would never even talk to another
SoC vendor.

> work than it took me to hack up this patch. But if I understand
> correctly, with a soft parser in place, the MAC error counters should
> at least stop incrementing, if that is of any importance to you.

This is the tragedy: I know for a fact that a DSA soft parser exists,
but because of the aforementioned maze of NDAs and license agreements
we, the community, cannot have nice things.
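
Coming back to your "fd_status" question: if we went the NETIF_F_RXALL
route instead of netdev_uses_dsa(), I imagine the error path in
rx_default_dqrr() would end up looking roughly like this (completely
untested sketch, exact symbol names from memory):

	fd_status = be32_to_cpu(fd->status);

	if (unlikely(fd_status & FM_FD_STAT_RX_ERRORS)) {
		if (net_ratelimit())
			netif_warn(priv, hw, net_dev, "FD status = 0x%08x\n",
				   fd_status & FM_FD_STAT_RX_ERRORS);

		/* Pass errored frames up if RXALL is set, drop otherwise. */
		if (!(net_dev->features & NETIF_F_RXALL)) {
			percpu_stats->rx_errors++;
			dpaa_fd_release(net_dev, fd);
			return qman_cb_dqrr_consume;
		}
	}

That would also keep rx_errors from being bumped for frames we
deliberately let through.
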
>> If so, would that mean that we would have to verify it in software?
>
> I don't see any place in the network stack that recalculates the FCS if
> NETIF_F_RXALL is set. Additionally, without NETIF_F_RXFCS, I don't even
> know how the stack could tell a packet with a bad FCS apart from one
> with a good FCS. If NETIF_F_RXALL is set, then once a packet is
> received, it is simply assumed to be good.

Right, but there is a difference between a user explicitly enabling it
on a device and us enabling it because we need it internally in the
kernel.

In the first scenario, the user can hardly complain, as they have
explicitly requested to see all packets on that device. That would not
be true in the second one, because there would be no way for the user
to turn it off. It feels like you would end up in a situation similar
to the user- vs. kernel-requested promiscuous setting.

It seems to me that if we enable it, we are responsible for not letting
crap through to the port netdevs.
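
If verifying the FCS in software is the price, the check itself is not
a lot of code, assuming the MAC is also told to leave the FCS in the
buffer. Something along these lines (hypothetical helper, and assuming
I have the CRC conventions right):

#include <asm/unaligned.h>
#include <linux/crc32.h>
#include <linux/if_ether.h>
#include <linux/skbuff.h>

/* Hypothetical helper: verify the trailing FCS of a frame received on
 * a DSA master where RXALL was enabled behind the user's back.
 * Assumes the MAC was configured to keep the FCS in the buffer.
 */
static bool dsa_master_fcs_ok(const struct sk_buff *skb)
{
	u32 crc;

	if (skb->len <= ETH_FCS_LEN)
		return false;

	/* Ethernet FCS: reflected CRC-32 of everything preceding it,
	 * complemented, stored in little-endian byte order.
	 */
	crc = ~crc32_le(~0, skb->data, skb->len - ETH_FCS_LEN);

	return crc == get_unaligned_le32(skb->data + skb->len - ETH_FCS_LEN);
}

Where such a check would live (in DSA or in the master's driver) is
another question, of course.
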
> There is a separate hardware bit to include the FCS in the RX buffer, I
> don't think this is what you want/need.
>
>> >>
>> >> As a workaround, switching to EDSA (thereby always having a proper
>> >> EtherType in the frame) solves the issue.
>> >
>> > So basically every user needs to change the tag protocol manually to be
>> > able to receive from port 8? Not sure if that's too friendly.
>>
>> No, it is not friendly at all. My goal was to add it as a device-tree
>> property, but for reasons I will detail in my answer to Andrew, I did
>> not manage to figure out a good way to do that. Happy to take
>> suggestions.
>
> My two cents here are that you should think for the long term. If you
> need it due to a limitation which you have today but might no longer
> have tomorrow, don't put it in the device tree unless you want to
> support it even when you don't need it anymore.