Message-ID: <CACiydbJV5djpKdsgnKTDfxHpC0aPhF-bdH=zVzGeo9x127RFJg@mail.gmail.com>
Date: Thu, 19 Jan 2017 12:55:16 +0200
From: Roman Yeryomin <leroi.lists@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [net, 3/6] net: korina: increase tx/rx ring sizes
On 17 January 2017 at 21:20, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tue, 2017-01-17 at 20:27 +0200, Roman Yeryomin wrote:
>> On 17 January 2017 at 19:58, David Miller <davem@...emloft.net> wrote:
>> > From: Roman Yeryomin <leroi.lists@...il.com>
>> > Date: Tue, 17 Jan 2017 19:32:36 +0200
>> >
>> >> Having larger ring sizes almost eliminates rx fifo overflow, thus improving performance.
>> >> This patch reduces rx overflow occurrence by approximately 1000 times (from ~25k down to ~25 per 3M frames).
>> >
>> > Those numbers don't mean much without full context.
>> >
>> > What kind of system, what kind of traffic, and over what kind of link?
>>
>> MIPS rb532 board, TCP iperf3 test over 100M link, NATed speed ~55Mbps.
>> I can do more tests and provide more precise numbers, if needed.
>
> Note that at 100M, 64 rx descriptors have an 8 ms max latency.
>
> Switching to 256 also multiplies the latency by 4 -> 32 ms latency.
>
> Presumably switching to NAPI and GRO would avoid the latency increase
> and save a lot of cpu cycles for a MIPS board.
>
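(For reference, Eric's latency figures follow from simple serialization
arithmetic: the time to drain a full rx ring, assuming the worst case of
full-size 1500-byte Ethernet frames on a 100 Mbit/s link. A quick sketch:)

```python
# Worst-case time to drain a full rx ring on a 100 Mbit/s link,
# assuming every descriptor holds a full-size 1500-byte frame.
LINK_BPS = 100e6        # 100 Mbit/s link speed
FRAME_BITS = 1500 * 8   # full-size Ethernet payload frame, in bits

def ring_latency_ms(descriptors):
    """Serialization time for a full ring of max-size frames, in ms."""
    return descriptors * FRAME_BITS / LINK_BPS * 1e3

print(round(ring_latency_ms(64), 1))   # ~7.7 ms -> the "8 ms" figure
print(round(ring_latency_ms(256), 1))  # ~30.7 ms -> the "32 ms" figure
```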
Eric, thanks for suggesting GRO: it gives a huge performance gain when
receiving locally (55->95Mbps) and a more than 25% gain for NAT
(55->70Mbps).
Also, reading the datasheet more carefully, I see that the device rx
descriptor status flags are interpreted incorrectly, so I will provide
an updated patch set.
Thanks for feedback!
Regards,
Roman