Message-ID: <50517F01.9050501@mellanox.com>
Date: Thu, 13 Sep 2012 09:36:49 +0300
From: Shlomo Pongartz <shlomop@...lanox.com>
To: Rick Jones <rick.jones2@...com>
CC: Eric Dumazet <eric.dumazet@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: GRO aggregation
On 9/12/2012 7:52 PM, Rick Jones wrote:
> On 09/12/2012 09:34 AM, Shlomo Pongartz wrote:
>> On 9/12/2012 7:23 PM, Rick Jones wrote:
>>> On 09/12/2012 07:41 AM, Shlomo Pongartz wrote:
>>>> Hi Eric
>>>>
>>>> TSO is just a means to create a burst of frames on the wire, so that
>>>> NAPI will be able to poll as much as possible.
>>>
>>> Is it? If I recall correctly, TSO was in place well before all
>>> drivers were using NAPI. And NAPI was being proposed independent of
>>> TSO. TSO is there to save CPU cycles on the transmit side. "On the
>>> wire" what it sends is to be identical to what a host with greater CPU
>>> performance could accomplish.
>>>
>>> rick jones
>>>
>> Hi Rick.
>>
>> What I am saying is that I use TSO on the transmitting machine so that
>> there will be a burst of frames on the wire for NAPI on the receiving
>> machine.
>
> Also, NAPI was in place before GRO. IIRC, the napi code was simply a
> convenient/correct/natural place to have the GRO functionality.
>
> rick jones
>
Hi Rick
The thing is that napi_complete calls napi_gro_flush, so this poses a
limit on the aggregation.
However, when I count the number of packets received before this routine
is called, I get a number that is bigger than what I see with tcpdump,
yet less than what is expected if the limit is 64K.
So I want to know what I can do in order to improve things, e.g.
allocate the skb differently.
Shlomo