Message-ID: <49e0cf25-331b-4d26-8d9a-66434e7a270e@lunn.ch>
Date: Tue, 11 Apr 2023 04:19:02 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Liang Li <liali@...hat.com>
Cc: j.vosburgh@...il.com, vfalico@...il.com, andy@...yhouse.net,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
Paolo Abeni <pabeni@...hat.com>, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Hangbin Liu <haliu@...hat.com>,
"Toppins, Jonathan" <jtoppins@...hat.com>
Subject: Re: [Question] About bonding offload
On Tue, Apr 11, 2023 at 09:47:14AM +0800, Liang Li wrote:
> Hi Everyone,
>
> I'm a Red Hat network QE engineer and am testing bonding offload,
> e.g. GSO, TSO, GRO, LRO.
> I ran into two questions during my testing.
>
> 1. TCP performance shows no difference when bonding GRO is on versus off.
> When testing with bonding, I always get ~890 Mbits/sec bandwidth no
> matter whether GRO is on or off.
> When testing with a physical NIC instead of bonding on the same
> machine, I get 464 Mbits/sec bandwidth with GRO off and 897 Mbits/sec
> with GRO on.
> So it looks like GRO can't be turned off on bonding?
>
> I used iperf3 to test performance.
> I also limited the iperf3 process's CPU usage during testing to
> simulate a CPU bottleneck; otherwise it is difficult to see a
> bandwidth difference with offload on versus off.
>
> I reported a bz for this: https://bugzilla.redhat.com/show_bug.cgi?id=2183434
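
As an illustration only, a minimal sketch of such a comparison run; the
server address 192.0.2.1, the receiver's bond0 interface, and systemd-run
as the CPU-limiting mechanism are assumptions, not the exact setup from
the report:

  # receiver (bond0): toggle GRO for the comparison run
  ethtool -K bond0 gro off        # or: ethtool -K bond0 gro on

  # cap the iperf3 server at ~20% of one CPU to simulate the bottleneck
  systemd-run --scope -p CPUQuota=20% iperf3 -s

  # sender: 192.0.2.1 is a placeholder for the receiver's address
  iperf3 -c 192.0.2.1 -t 30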
>
> 2. Should bonding propagate offload configuration to its slaves?
> Currently, only "ethtool -K bond0 lro off" is propagated to the
> slaves; the others are not, e.g.
> ethtool -K bond0 tso on/off
> ethtool -K bond0 gso on/off
> ethtool -K bond0 gro on/off
> ethtool -K bond0 lro on
> None of the above configurations are propagated to the bonding slaves.
>
> I reported a bz for this: https://bugzilla.redhat.com/show_bug.cgi?id=2183777
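
As an illustration, a minimal sketch of how the propagation can be
checked from userspace; eth0 and eth1 stand in for the actual slave
interface names:

  # change the feature on the bond
  ethtool -K bond0 gro off

  # compare the bond's feature state with each slave's
  ethtool -k bond0 | grep generic-receive-offload
  ethtool -k eth0  | grep generic-receive-offload
  ethtool -k eth1  | grep generic-receive-offload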
>
> I am using RHEL with kernel 4.18.0-481.el8.x86_64.
Hi Liang
Can you reproduce these issues with a modern kernel? net-next, or 6.3?
The normal process for issues like this is to investigate with the
latest kernel, and then backport fixes to old stable kernels.
Andrew