Message-ID: <542D3D5F.5090900@mellanox.com>
Date: Thu, 2 Oct 2014 14:56:15 +0300
From: Amir Vadai <amirv@...lanox.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Or Gerlitz <gerlitz.or@...il.com>,
Alexei Starovoitov <ast@...mgrid.com>,
"David S. Miller" <davem@...emloft.net>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
John Fastabend <john.r.fastabend@...el.com>,
Linux Netdev List <netdev@...r.kernel.org>,
"Or Gerlitz" <or.gerlitz@...il.com>, <amira@...lanox.com>,
<idos@...lanox.com>, Yevgeny Petrilin <yevgenyp@...lanox.com>,
<eyalpe@...lanox.com>
Subject: Re: [PATCH v2 net-next] mlx4: optimize xmit path
On 10/2/2014 2:45 PM, Eric Dumazet wrote:
> On Thu, 2014-10-02 at 11:03 +0300, Amir Vadai wrote:
>
>> Hi,
>>
>> Will take it into the split patchset - we just hit this bug when we tried
>> to run benchmarks with blueflame disabled (easy to test by toggling the
>> blueflame ethtool priv flag).
>
> Hmm, I do not know this ethtool command, please share ;)
$ ethtool --set-priv-flags eth0 blueflame off
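You can check the current state with (assuming eth0 is the mlx4 port):
$ ethtool --show-priv-flags eth0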
>
>>
>> I'm still working on it, but I can't reproduce the numbers that you
>> show. On my development machine, I get ~5.5Mpps with burst=8 and ~2Mpps
>> with burst=1.
>
> You have to be careful with the 'clone X' value: if you choose one that is
> too big, TX completion competes with the sender thread.
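For anyone reproducing: a minimal pktgen sketch (assuming eth0 bound to thread
kpktgend_0 and the pktgen burst patch applied; the values, dst and dst_mac are
only placeholders, not the exact setup used for the numbers above):

  echo "rem_device_all"            > /proc/net/pktgen/kpktgend_0
  echo "add_device eth0"           > /proc/net/pktgen/kpktgend_0
  echo "count 10000000"            > /proc/net/pktgen/eth0
  echo "pkt_size 105"              > /proc/net/pktgen/eth0
  echo "clone_skb 1000"            > /proc/net/pktgen/eth0  # the 'clone X' knob
  echo "burst 8"                   > /proc/net/pktgen/eth0  # needs the burst patch
  echo "dst 192.168.0.2"           > /proc/net/pktgen/eth0  # placeholder target
  echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0  # placeholder MAC
  echo "start"                     > /proc/net/pktgen/pgctrl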
>
>>
>> In addition, I see no improvements when adding the optimization to the
>> xmit path.
After making sure the sender thread and the TX completions are not on
the same CPU, I see the expected improvement: +0.5Mpps with the tx
optimizations.
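In case it helps others reproduce, one way to separate them (the IRQ number
below is a placeholder; look up the actual mlx4 TX completion vector in
/proc/interrupts):

  # pktgen's kernel thread kpktgend_0 runs on CPU 0, so steer the
  # completion interrupt to a different CPU, e.g. CPU 2 (mask 0x4):
  echo 4 > /proc/irq/60/smp_affinity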
>> I use the net-next kernel + pktgen burst support patch, with and without
>> this xmit path optimization patch.
>>
>> Do you use other patches not upstream in your environment?
>
> Nope, this is with David net-next tree.
>
>> Can you share the .config/pktgen configuration?
>
> Sure.
>
>>
>> One other note: we're now checking whether blueflame can be used with
>> xmit_more. It might result in packet reordering/drops. Still under
>> investigation.
>
> I noticed no reorders. I tweaked the stack to force gso segmentation
> (in software) instead of using NIC TSO for small packets (2 or 3 MSS).
>
> Performance of 200 concurrent 'netperf -t TCP_RR -- -r 2000,2000' runs
> increased by ~100%.
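(For reference, a rough way to generate that kind of load - DST and the 30s
duration are placeholders, and netserver must already be running on the target:

  DST=192.168.0.2
  for i in $(seq 1 200); do
      netperf -H $DST -t TCP_RR -l 30 -- -r 2000,2000 &
  done
  wait
)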
>
>
> #!/bin/bash
> #
> # on the destination, drop packets with
> # iptables -A PREROUTING -t raw -p udp --dport 9 -j DROP
> # Or run a recent enough kernel with global ICMP rate limiting to 1000 packets/sec
> # ( http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=4cdf507d54525842dfd9f6313fdafba039084046 )
> #
> #### Configure
>
> # Yeah, if you use PKTSIZE <= 104, performance is lower because of inlining (the whole frame content is copied into the tx desc)
> PKTSIZE=105
You can also turn inlining off by setting the module parameter:
$ modprobe mlx4_en inline_thold=17
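If the parameter is exported read-only in sysfs (I believe it is, but check on
your kernel), the current value can be read back with:
$ cat /sys/module/mlx4_en/parameters/inline_thold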
>
[...]
Thanks,
Amir