Message-ID: <Y2EqgyAChS1/6VqP@Laptop-X1>
Date: Tue, 1 Nov 2022 22:17:39 +0800
From: Hangbin Liu <liuhangbin@...il.com>
To: Jay Vosburgh <jay.vosburgh@...onical.com>
Cc: netdev@...r.kernel.org, "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jonathan Toppins <jtoppins@...hat.com>,
Paolo Abeni <pabeni@...hat.com>,
David Ahern <dsahern@...il.com>, Liang Li <liali@...hat.com>
Subject: Re: [PATCH net] bonding: fix ICMPv6 header handling when receiving
IPv6 messages
On Tue, Nov 01, 2022 at 09:39:22PM +0800, Hangbin Liu wrote:
> > I don't understand this explanation, as ipv6_gro_receive() isn't
> > called directly by the device drivers, but from within the GRO
> > processing, e.g., by dev_gro_receive().
> >
> > Could you explain how the call paths actually differ?
>
> Er..Yes, it's a little weird.
>
> I checked whether the transport header is set before __netif_receive_skb_core().
> The bnx2x driver sets it while be2net does not, so with be2net the transport
> header is reset in __netif_receive_skb_core().
>
> I also found that ipv6_gro_receive() is called before bond_handle_frame() when
> receiving an NA message. I'm not sure which path it goes through; I'm not very
> familiar with the driver part, but I can investigate further.
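A minimal sketch of the kind of check mentioned above (a hypothetical debug
helper, not the exact code I used; skb_transport_header_was_set() and
skb_transport_offset() are the real helpers from <linux/skbuff.h>):

/* Hypothetical debug helper, called near the top of
 * __netif_receive_skb_core(): report whether the driver already set
 * the transport header; if not, __netif_receive_skb_core() will
 * reset it.
 */
static void debug_check_transport_header(const struct sk_buff *skb)
{
	const char *name = skb->dev ? skb->dev->name : "?";

	if (skb_transport_header_was_set(skb))
		pr_info("%s: transport header set by driver (offset %d)\n",
			name, skb_transport_offset(skb));
	else
		pr_info("%s: transport header NOT set by driver\n", name);
}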
With dump_stack(), it shows that bnx2x does call ipv6_gro_receive().
PS: I only dump the stack when receiving an NA, roughly as in the sketch below.
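(Illustrative only: the helper name and where it is called from in
ipv6_gro_receive() are my own sketch of the debug hack, not the exact diff;
it assumes the headers are linear and that skb->data points at the IPv6
header at that point in the GRO path.)

static void debug_dump_on_na(const struct sk_buff *skb)
{
	/* Quick debug hack: dump the call chain only for ICMPv6
	 * neighbour advertisements so the log stays readable.
	 * The network/transport header offsets may not be set yet in
	 * the GRO path, so read from skb->data directly.
	 * Needs <linux/icmpv6.h> and <net/ndisc.h> for icmp6hdr and
	 * NDISC_NEIGHBOUR_ADVERTISEMENT.
	 */
	const struct ipv6hdr *ip6h = (const struct ipv6hdr *)skb->data;
	const struct icmp6hdr *icmp6;

	if (ip6h->nexthdr != IPPROTO_ICMPV6)
		return;

	/* assume no extension headers, which holds for an NA */
	icmp6 = (const struct icmp6hdr *)(ip6h + 1);
	if (icmp6->icmp6_type == NDISC_NEIGHBOUR_ADVERTISEMENT)
		dump_stack();
}

The resulting trace for the NA path: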
[ 65.537605] dump_stack_lvl+0x34/0x48
[ 65.541695] ipv6_gro_receive.cold+0x1b/0x3d
[ 65.546453] dev_gro_receive+0x16c/0x380
[ 65.550831] napi_gro_receive+0x64/0x210
[ 65.555206] bnx2x_rx_int+0x44c/0x820 [bnx2x]
[ 65.560100] bnx2x_poll+0xe5/0x1d0 [bnx2x]
[ 65.564687] __napi_poll+0x2c/0x160
[ 65.568579] net_rx_action+0x296/0x350
Thanks
Hangbin