Message-ID: <20120803133407.GA5736@oc1711230544.ibm.com>
Date: Fri, 3 Aug 2012 10:34:07 -0300
From: Thadeu Lima de Souza Cascardo <cascardo@...ux.vnet.ibm.com>
To: Yevgeny Petrilin <yevgenyp@...lanox.com>
Cc: "David S. Miller" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Or Gerlitz <ogerlitz@...lanox.com>
Subject: Re: [PATCH] mlx4_en: add UFO support
On Fri, Aug 03, 2012 at 08:29:26AM +0000, Yevgeny Petrilin wrote:
> >
> > Mellanox Ethernet adapters support Large Segmentation Offload for UDP
> > packets. The only change needed is using the proper header size when the
> > packet is UDP instead of TCP.
> >
> > This significantly increases performance for large UDP packets on platforms
> > which have an expensive dma_map call, like pseries.
> >
> > On a simple test with a 64000-byte payload, throughput increased from
> > about 6 Gbps to 9.5 Gbps, while CPU usage dropped from about 600% to
> > about 80% or less, on an 8-core Power7 machine.
> >
> Hi Thadeu,
> Can you please send the details of the adapter you are testing with? What test are you running?
> I just tried this patch with netperf on my x86_64 machine, and it doesn't work. Packets are not fragmented properly (fragment offsets are not calculated).
> It is true that the TX side doesn't work as hard (the OS doesn't need to do the fragmentation), but traffic is not sent properly on the wire.
>
> I'll do further investigation and get back with more details.
>
> Yevgeny
>
Hi, Yevgeny.
At first, I only added the UFO feature flag. When testing that, I got
lots of errors on the receiving end, like:

UDP: short packet: From 10.0.0.2:0 0/1480 to 10.0.0.3:0

After looking at what the driver was writing to the LSO descriptor, it
was obvious why this happened: the driver was using the TCP header size
as the LSO header size, and since that size is read from the TCP
data-offset field, on a UDP packet it picked up a value from the
payload.
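
For reference, the fix boils down to picking the header size by
protocol in the TX path. A rough sketch of the logic (paraphrased, not
the literal patch; mlx4_en computes this inline in get_real_size()):

if (skb_is_gso(skb)) {
	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP)
		/* The UDP header is fixed at 8 bytes; there is no
		 * data-offset field to read from the packet. */
		lso_header_size = skb_transport_offset(skb) +
				  sizeof(struct udphdr);
	else
		/* tcp_hdrlen() reads the TCP data-offset field; on a
		 * UDP packet that read lands in the payload, which is
		 * exactly the bug described above. */
		lso_header_size = skb_transport_offset(skb) +
				  tcp_hdrlen(skb);
}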

With the header-size change, however, everything should work. I ran a
uperf test with 64000-byte payloads and everything looked fine.
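
In case it helps to reproduce this on your side, the traffic is
essentially just large UDP datagrams; something like this minimal
sender generates the same pattern (a sketch, not the actual test
driver; the port and datagram count are placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	/* A 64000-byte payload forces the stack to fragment the
	 * datagram (or to use UFO when the device advertises it). */
	char buf[64000];
	struct sockaddr_in dst;
	int s = socket(AF_INET, SOCK_DGRAM, 0);
	int i;

	memset(buf, 0xab, sizeof(buf));
	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(5001);	/* placeholder port */
	inet_pton(AF_INET, "10.0.0.3", &dst.sin_addr);

	for (i = 0; i < 10000; i++)	/* placeholder count */
		sendto(s, buf, sizeof(buf), 0,
		       (struct sockaddr *)&dst, sizeof(dst));

	close(s);
	return 0;
}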

The card I have here is:
0001:01:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
        Subsystem: Mellanox Technologies Device 0016
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at 3da0fbe00000 (64-bit, non-prefetchable) [size=1M]
        Memory at 3da0fc000000 (64-bit, prefetchable) [size=32M]
        Expansion ROM at 3da0fbf00000 [disabled] [size=1M]
        Capabilities: [40] Power Management version 3
        Capabilities: [48] Vital Product Data
        Capabilities: [9c] MSI-X: Enable+ Count=128 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [148] Device Serial Number 00-02-c9-03-00-4b-97-c4
        Kernel driver in use: mlx4_core
        Kernel modules: mlx4_core

I will run some other tests here and report my results.

Regards.
Cascardo.