Message-ID: <2861b6ca-4b65-4500-addf-ca13b415a56f@redhat.com>
Date: Thu, 28 Aug 2025 14:50:15 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Alexander Duyck <alexander.duyck@...il.com>, netdev@...r.kernel.org
Cc: kuba@...nel.org, kernel-team@...a.com, andrew+netdev@...n.ch,
davem@...emloft.net
Subject: Re: [net-next PATCH 0/4] fbnic: Synchronize address handling with BMC
On 8/28/25 12:46 PM, Paolo Abeni wrote:
> On 8/26/25 9:44 PM, Alexander Duyck wrote:
>> The fbnic driver needs to communicate with the BMC if it is operating on
>> the RMII-based transport (RBT) of the same port the host is on. To enable
>> this, we need to add rules that route BMC traffic to the RBT/BMC, and the
>> BMC and firmware need to configure rules on the RBT side of the interface
>> to route traffic from the BMC to the host instead of the MAC.
>>
>> To enable that, this patch set addresses two issues. First, it causes the
>> TCAM to be reconfigured if the BMC was not present when the driver was
>> loaded but the FW later notifies us that the FW capabilities have changed
>> and a BMC with various MAC addresses is now present. Second, it adds
>> support for sending a message to the firmware so that if the host adds
>> additional MAC addresses the FW can be made aware and route traffic for
>> those addresses from the RBT to the host instead of the MAC.
>
> The CI is observing a few possible leaks on top of this series:
>
> unreferenced object 0xffff888011146040 (size 216):
>   comm "napi/enp1s0-0", pid 4116, jiffies 4295559830
>   hex dump (first 32 bytes):
>     c0 bc a0 08 80 88 ff ff 00 00 00 00 00 00 00 00  ................
>     00 40 02 08 80 88 ff ff 00 00 00 00 00 00 00 00  .@..............
>   backtrace (crc d10d3409):
>     kmem_cache_alloc_bulk_noprof+0x115/0x160
>     napi_skb_cache_get+0x423/0x750
>     napi_build_skb+0x19/0x210
>     xdp_build_skb_from_buff+0xda/0x820
>     fbnic_run_xdp+0x36c/0x550
>     fbnic_clean_rcq+0x540/0x1790
>     fbnic_poll+0x142/0x290
>     __napi_poll.constprop.0+0x9f/0x460
>     napi_threaded_poll_loop+0x44d/0x610
>     napi_threaded_poll+0x17/0x30
>     kthread+0x37b/0x5f0
>     ret_from_fork+0x240/0x320
>     ret_from_fork_asm+0x11/0x20
> unreferenced object 0xffff888008a0bcc0 (size 216):
>   comm "napi/enp1s0-0", pid 4116, jiffies 4295560865
>   hex dump (first 32 bytes):
>     00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>     00 40 02 08 80 88 ff ff 00 00 00 00 00 00 00 00  .@..............
>   backtrace (crc d69e2bd9):
>     kmem_cache_alloc_node_noprof+0x289/0x330
>     __alloc_skb+0x20f/0x2e0
>     __tcp_send_ack.part.0+0x68/0x6b0
>     tcp_rcv_established+0x69c/0x2340
>     tcp_v6_do_rcv+0x9b4/0x1370
>     tcp_v6_rcv+0x1bc5/0x2f90
>     ip6_protocol_deliver_rcu+0x112/0x1140
>     ip6_input+0x201/0x5e0
>     ip6_sublist_rcv_finish+0x91/0x260
>     ip6_list_rcv_finish.constprop.0+0x55b/0xa10
>     ipv6_list_rcv+0x318/0x4b0
>     __netif_receive_skb_list_core+0x4c6/0x980
>     netif_receive_skb_list_internal+0x63c/0xe50
>     gro_complete.constprop.0+0x54d/0x750
>     __gro_flush+0x14a/0x490
>     __napi_poll.constprop.0+0x319/0x460
>
> But AFAICS they don't look related to the changes in this series,
I went over the series more carefully, and I'm reasonably sure the leaks
are unrelated. Possibly kmemleak is fooled by some unfortunate timing?
In any case, I'm applying this series now.
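For reference, one way to check whether such reports are transient is to
clear kmemleak's current list and re-scan: objects that were only briefly
parked (e.g. in per-CPU skb caches) tend to disappear, while real leaks
survive the re-scan. This is a minimal sketch using the standard kmemleak
debugfs interface; it assumes CONFIG_DEBUG_KMEMLEAK is enabled and that
you are running as root.

```shell
#!/bin/sh
# Re-check kmemleak reports by clearing the current list and forcing two
# fresh scans.  Objects still reported after the second scan are much
# stronger leak candidates; transient references usually age out.
KMEMLEAK=/sys/kernel/debug/kmemleak

recheck_kmemleak() {
    if [ -w "$KMEMLEAK" ]; then
        echo clear > "$KMEMLEAK"   # drop all currently reported objects
        echo scan  > "$KMEMLEAK"   # trigger an immediate memory scan
        sleep 5                    # give in-flight objects time to be freed
        echo scan  > "$KMEMLEAK"   # scan again
        cat "$KMEMLEAK"            # anything still reported is a real suspect
        echo "kmemleak re-scan done"
    else
        echo "kmemleak debugfs interface not available"
    fi
}

recheck_kmemleak
```

If the reports vanish after the re-scan, that supports the "unfortunate
timing" theory rather than an actual leak in the series.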
/P