Message-ID: <20250402100039.4cae8073@kernel.org>
Date: Wed, 2 Apr 2025 10:00:39 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: "Álvaro \"G. M.\"" <alvaro.gamez@...ent.com>
Cc: netdev@...r.kernel.org, Radhey Shyam Pandey
<radhey.shyam.pandey@....com>
Subject: Re: Issue with AMD Xilinx AXI Ethernet (xilinx_axienet) on
MicroBlaze: Packets only received after some buffer is full
+CC Radhey, maintainer of axienet
On Tue, 01 Apr 2025 12:52:15 +0200 Álvaro "G. M." wrote:
> Hello,
>
> I have a custom PCB fitting an AMD/Xilinx Artix 7 FPGA with a Microblaze design
> inside that uses Xilinx's AXI 1G/2.5G Ethernet Subsystem connected via DMA.
>
> This board and HDL design have been tested and in production since 2016 on
> kernel 4.4.43 without any issue. The Ethernet PHY is a DP83620 running in
> 100base-FX mode, which back in the day required a small patch to dp83848.c
> from me that has been in the kernel since.
>
> I am now trying to upgrade to a recent kernel (v6.13) and I'm seeing some strange
> behavior in the ethernet system. The most probable cause is a misconfiguration
> of the device tree on my part, since things have changed since then and I've found
> the device tree documentation confusing, but I can't rule out some kind of bug,
> as I have never seen anything like this.
>
> Relevant boot messages:
>
> xilinx_axienet 40c00000.ethernet eth0: PHY [axienet-40c00000:01] driver [TI DP83620 10/100 Mbps PHY] (irq=POLL)
> xilinx_axienet 40c00000.ethernet eth0: configuring for phy/mii link mode
> xilinx_axienet 40c00000.ethernet eth0: Link is Up - 100Mbps/Half - flow control off
>
> Now, transmission from the Microblaze seems to work fine, but reception does not.
> Running tcpdump on the Microblaze, I can see that some kind of buffering is occurring:
> a single ARP packet sent from my directly connected computer won't reach tcpdump unless
> I also send a big chunk of data (via, for example, multicast) or after enough ping flooding.
>
> It's not, however, a matter of sending a big chunk of data once at the beginning: the
> buffer seems to empty once full and the process starts over, so a single ping packet
> won't be received after the buffer has emptied.
>
> I can see that interrupts increase, but not as fast as they do with the old kernel.
> For example, in the ping case, kernel 4.4.43 will report an interrupt
> for each single ping packet received with ping -c 1 (so no coalescing shenanigans can occur),
> but the new kernel won't show any increase in the number of interrupts, which means
> the DMA core is either not raising the IRQ for some reason or isn't even
> executing the DMA transfer at all.
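A quick way to confirm whether the DMA RX interrupt fires at all is to snapshot /proc/interrupts around a single ping. A minimal sketch (the grep pattern and the target address are assumptions; adjust them to the labels in your /proc/interrupts and to the directly connected host):

```shell
# Sum the CPU0 counts of all interrupt lines that look ethernet/DMA related.
before=$(grep -iE 'axienet|axidma|ethernet' /proc/interrupts | awk '{s+=$2} END {print s+0}')
# Send exactly one ping (192.0.2.1 is a placeholder address).
ping -c 1 -W 2 192.0.2.1 >/dev/null 2>&1 || true
after=$(grep -iE 'axienet|axidma|ethernet' /proc/interrupts | awk '{s+=$2} END {print s+0}')
echo "IRQ delta: $((after - before))"
```

On the working 4.4.43 setup the delta should be at least 1 per received ping; a delta of 0 here points at the DMA side rather than the network stack.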
>
> Output packets, however, do seem to be sent promptly and are received on my working computer
> as soon as I send them from the Microblaze.
>
> I guess I may have made some mistake in upgrading the DTS to the new format, although
> I've tried both available methods (either setting the "dmas" property or using the
> "axistream-connected" property) and both result in the same boot messages and behavior.
>
> By crafting properly sized UDP multicast packets (so I don't have to rely on ARP, which isn't
> working due to timeouts), I've been able to determine that I need to send 131072 bytes before
> reception actually occurs, although somehow sending multicast UDP
> packets won't trigger the receive IRQ unless I have a specific UDP listener program running on
> the Microblaze. I'm quite confused about that too.
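The crafted multicast traffic described above can be generated with a short script; a sketch under assumptions (group 239.1.1.1, port 5000 and the 1400-byte chunk size are made up, only the 131072-byte total comes from the observation above):

```python
import socket

def send_multicast(total_bytes=131072, group="239.1.1.1", port=5000,
                   chunk=1400, iface=None):
    """Send at least total_bytes of UDP multicast in chunk-sized datagrams."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    if iface is not None:
        # Pin the outgoing interface by its IPv4 address.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                        socket.inet_aton(iface))
    payload = b"\x00" * chunk
    sent = 0
    while sent < total_bytes:
        sent += sock.sendto(payload, (group, port))
    sock.close()
    return sent

if __name__ == "__main__":
    # 94 datagrams of 1400 bytes crosses the 131072-byte threshold.
    print(send_multicast())
```

Running it once from the directly connected computer should push the receiver past the observed threshold in one shot, which makes the "buffer fills, then drains" behavior easy to reproduce on demand.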
>
> So please, if anyone could inspect the DTS for me and/or guide me on how to debug this, I'd be grateful.
>
> These are the relevant parts of the DTS for kernel 6.13, which I've hand-crafted with help
> from Documentation/devicetree/bindings and by peeking at xilinx_axienet_main.c:
>
>
> axi_ethernet_0_dma: dma@...00000 {
> compatible = "xlnx,axi-dma-1.00.a";
> #dma-cells = <1>;
> reg = <0x41e00000 0x10000>;
> interrupt-parent = <&microblaze_0_axi_intc>;
> interrupts = <7 1 8 1>;
> xlnx,addrwidth = <32>;
> xlnx,datawidth = <32>;
> xlnx,include-sg;
> xlnx,sg-length-width = <16>;
> xlnx,include-dre = <1>;
> xlnx,axistream-connected = <1>;
> xlnx,irq-delay = <1>;
> dma-channels = <2>;
> clock-names = "s_axi_lite_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk", "m_axi_sg_aclk";
> clocks = <&clk_bus_0>, <&clk_bus_0>, <&clk_bus_0>, <&clk_bus_0>;
> dma-channel@...00000 {
> compatible = "xlnx,axi-dma-mm2s-channel";
> xlnx,include-dre = <1>;
> interrupts = <7 1>;
> xlnx,datawidth = <32>;
> };
> dma-channel@...00030 {
> compatible = "xlnx,axi-dma-s2mm-channel";
> xlnx,include-dre = <1>;
> interrupts = <8 1>;
> xlnx,datawidth = <32>;
> };
> };
> axi_ethernet_eth: ethernet@...00000 {
> compatible = "xlnx,axi-ethernet-1.00.a";
> reg = <0x40c00000 0x40000>, <0x41e00000 0x10000>;
> phy-handle = <&phy1>;
> xlnx,rxmem = <0x1000>;
> phy-mode = "mii";
> xlnx,txcsum = <0x2>;
> xlnx,rxcsum = <0x2>;
> clock-names = "s_axi_lite_clk", "axis_clk", "ref_clk", "mgt_clk";
> clocks = <&clk_bus_0>, <&clk_bus_0>, <&clk_bus_0>, <&clk_bus_0>;
> /* axistream-connected = <&axi_ethernet_0_dma>; */
> dmas = <&axi_ethernet_0_dma 0>, <&axi_ethernet_0_dma 1>;
> dma-names = "tx_chan0", "rx_chan0";
> mdio {
> #address-cells = <1>;
> #size-cells = <0>;
> phy1: ethernet-phy@1 {
> device_type = "ethernet-phy";
> reg = <1>;
> };
> };
> };
>
>
> And these are the same parts of the DTS for kernel 4.4.43, which worked fine.
> These were created with help from Xilinx tools.
>
> axi_ethernet_0_dma: dma@...00000 {
> #dma-cells = <1>;
> compatible = "xlnx,axi-dma-1.00.a";
> interrupt-parent = <&microblaze_0_axi_intc>;
> interrupts = <7 1 8 1>;
> reg = <0x41e00000 0x10000>;
> xlnx,include-sg ;
> dma-channel@...00000 {
> compatible = "xlnx,axi-dma-mm2s-channel";
> dma-channels = <0x1>;
> interrupts = <7 1>;
> xlnx,datawidth = <0x8>;
> xlnx,device-id = <0x0>;
> };
> dma-channel@...00030 {
> compatible = "xlnx,axi-dma-s2mm-channel";
> dma-channels = <0x1>;
> interrupts = <8 1>;
> xlnx,datawidth = <0x8>;
> xlnx,device-id = <0x0>;
> };
> };
> axi_ethernet_eth: ethernet@...00000 {
> axistream-connected = <&axi_ethernet_0_dma>;
> axistream-control-connected = <&axi_ethernet_0_dma>;
> clock-frequency = <83250000>;
> clocks = <&clk_bus_0>;
> compatible = "xlnx,axi-ethernet-1.00.a";
> device_type = "network";
> interrupt-parent = <&microblaze_0_axi_intc>;
> interrupts = <3 0>;
> phy-mode = "mii";
> reg = <0x40c00000 0x40000>;
> xlnx = <0x0>;
> xlnx,axiliteclkrate = <0x0>;
> xlnx,axisclkrate = <0x0>;
> xlnx,gt-type = <0x0>;
> xlnx,gtinex = <0x0>;
> xlnx,phy-type = <0x0>;
> xlnx,phyaddr = <0x1>;
> xlnx,rable = <0x0>;
> xlnx,rxcsum = <0x2>;
> xlnx,rxlane0-placement = <0x0>;
> xlnx,rxlane1-placement = <0x0>;
> xlnx,rxmem = <0x1000>;
> xlnx,rxnibblebitslice0used = <0x1>;
> xlnx,tx-in-upper-nibble = <0x1>;
> xlnx,txcsum = <0x2>;
> xlnx,txlane0-placement = <0x0>;
> xlnx,txlane1-placement = <0x0>;
> phy-handle = <&phy0>;
> axi_ethernetlite_0_mdio: mdio {
> #address-cells = <1>;
> #size-cells = <0>;
> phy0: phy@1 {
> device_type = "ethernet-phy";
> reg = <1>;
> ti,rx-internal-delay = <7>;
> ti,tx-internal-delay = <7>;
> ti,fifo-depth = <1>;
> };
> };
> };
>
>
>
> Best regards,
>