Message-ID: <1466638975.6850.102.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Wed, 22 Jun 2016 16:42:55 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Yuval Mintz <Yuval.Mintz@...gic.com>,
Manish Chopra <manish.chopra@...gic.com>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Ariel Elior <Ariel.Elior@...gic.com>,
Tom Herbert <tom@...bertland.com>,
Hannes Frederic Sowa <hannes@...hat.com>
Subject: Re: [PATCH net-next 0/5] qed/qede: Tunnel hardware GRO support
On Wed, 2016-06-22 at 14:32 -0700, Alexander Duyck wrote:
> The idea behind GRO was to make it so that we had a generic way to
> handle this in software. For the most part drivers doing LRO in
> software were doing the same thing that the GRO was doing. The only
> reason it was deprecated is because GRO was capable of doing more than
> LRO could since we add one parser and suddenly all devices saw the
> benefit instead of just one specific device. It is best to keep those
> two distinct solutions and then let the user sort out if they want to
> have the aggregation done by the device or the kernel.
Presumably we could add feature flags to selectively enable parts of LRO
(really simple GRO offloading) for NICs that partially match GRO
requirements.
Patch 5/5 seems to enable the hardware feature(s):
+ p_ramrod->tpa_param.tpa_ipv4_tunn_en_flg = 1;
+ p_ramrod->tpa_param.tpa_ipv6_tunn_en_flg = 1;
So this NIC seems to have a way to control its LRO engine.
If some horrible bug happens for GRE+IPv6+TCP, you could disable LRO
for GRE encapsulation only, instead of disabling LRO as a whole
(today ethtool -K ethX lro off is the only knob, and it kills the
whole engine).
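For illustration, a minimal sketch of what such a split could look like
in a driver's .ndo_set_features handler. Everything past the existing
NETIF_F_LRO check is hypothetical: neither a NETIF_F_LRO_TUNNEL bit nor
the tpa_tunn_enabled field exists in mainline, they just mirror the
ramrod flags quoted above:

	#include <linux/netdevice.h>

	/* Sketch only: today NETIF_F_LRO is all-or-nothing. */
	static int foo_set_features(struct net_device *dev,
				    netdev_features_t features)
	{
		struct foo_dev *edev = netdev_priv(dev);

		edev->tpa_enabled = !!(features & NETIF_F_LRO);

		/* Hypothetical finer-grained bit, which would drive
		 * the tpa_ipv4/ipv6_tunn_en_flg ramrod fields:
		 *
		 * edev->tpa_tunn_enabled =
		 *	!!(features & NETIF_F_LRO_TUNNEL);
		 */
		return 0;
	}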
We have a software fallback. Nice, but with quite a heavy CPU cost.
If we can use the offload without breaking the rules, let's use it.
Some NICs have terrible LRO performance (I won't give details here),
but others are OK.
Some NICs have terrible TSO performance for a small number of segments
(gso_segs < 4).
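On that last point, a driver can already steer such packets back to
the software path. A minimal sketch (driver name and the threshold of
4 are illustrative) using the existing .ndo_features_check hook:

	#include <linux/netdevice.h>

	/* Sketch: strip GSO features from small TSO bursts so that
	 * validate_xmit_skb() segments them in software instead of
	 * handing them to a slow hardware TSO engine.
	 */
	static netdev_features_t
	foo_features_check(struct sk_buff *skb, struct net_device *dev,
			   netdev_features_t features)
	{
		if (skb_is_gso(skb) && skb_shinfo(skb)->gso_segs < 4)
			features &= ~NETIF_F_GSO_MASK;

		return features;
	}

Clearing the GSO bits here makes the core segment the skb in software
before it ever reaches the driver, so only the pathological case pays
the CPU cost.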