Date:	Tue, 27 Sep 2011 10:14:46 -0700
From:	Alexander Duyck <alexander.h.duyck@...el.com>
To:	"J.Hwan Kim" <frog1120@...il.com>
CC:	netdev <netdev@...r.kernel.org>
Subject: Re: intel 82599 multi-port performance

On 09/26/2011 05:45 PM, J.Hwan Kim wrote:
> On 09/27/2011 01:04, Alexander Duyck wrote:
>> On 09/26/2011 08:42 AM, J.Hwan.Kim wrote:
>>> On 09/26/2011 23:20, Chris Friesen wrote:
>>>> On 09/26/2011 04:26 AM, J.Hwan Kim wrote:
>>>>> Hi, everyone
>>>>>
>>>>> Now, I'm testing a network card with an Intel 82599.
>>>>> In our experiment, with the driver modified with ixgbe and multi-port
>>>>> enabled,
>>>>
>>>> What do you mean by "modified with ixgbe and multi-port enabled"? You
>>>> shouldn't need to do anything special to use both ports.
>>>>
>>>>> the rx performance of each port with 10Gbps of 64-byte frames is
>>>>> half of what it is when only 1 port is used.
>>>>
>>>> Sounds like a cpu limitation. What is your cpu usage? How are your
>>>> interrupts routed? Are you using multiple rx queues?
>>>>
>>>
>>> Our server is a 2.4GHz Xeon with 8 cores.
>>> I'm using 4 RSS queues for each port and distributed their interrupts
>>> to different cores.
>>> I checked the CPU utilization with top, and I don't think it is a CPU
>>> limitation problem.
>>
>> What kind of rates are you seeing on a single port versus multiple 
>> ports?  There are multiple possibilities in terms of what could be 
>> limiting your performance.
>>
>
> I tested 10G with 64-byte frames.
> With the modified ixgbe driver, on a single port about 92% of packets were
> received at the driver level, and with 2 ports we received around 42% of
> packets.

When you say 92% of packets are received, are you talking about 92% of 
line rate, which would be somewhere around 14.8Mpps?
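
As a rough sanity check, the 64-byte line rate on 10GbE works out to about
14.88Mpps once you count the preamble and inter-frame gap; a quick
back-of-the-envelope sketch in Python (assuming the standard 8-byte
preamble/SFD and 12-byte inter-frame gap):

    # Each 64-byte frame also occupies 8 bytes of preamble/SFD and a
    # 12-byte inter-frame gap on the wire: 84 bytes, or 672 bits, per frame.
    frame_bytes = 64
    wire_bits = (frame_bytes + 8 + 12) * 8
    line_rate_bps = 10e9
    pps = line_rate_bps / wire_bits
    print("%.2f Mpps" % (pps / 1e6))   # -> 14.88 Mpps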

>> It sounds like you are using a single card, would that be correct?
>
> Yes, I tested a single card with 2 ports.
>
>> If you are running close to line rate on both ports this could be 
>> causing you to saturate the PCIe x8 link.  If you have a second card 
>> available you may want to try installing that in a secondary Gen2 
>> PCIe slot and seeing if you can improve the performance by using 2 
>> PCIe slots instead of one.
>
> I tested that as well; when tested with 2 cards, it seems that the
> performance of each port is almost the same as with a single port
> (maximum performance).

This more or less confirms what I was thinking.  You are likely hitting 
the PCIe limits of the adapter.  The overhead for 64 byte packets is 
too great, and as a result you are exceeding the PCIe bandwidth available 
to the adapter.  In order to achieve line rate on both ports you would 
likely need to increase your packet size to something along the lines of 
256 bytes, so that the additional PCIe overhead contributes only 50% or 
less of the total PCIe traffic across the bus.  Then the 2.5Gb/s of 
network traffic per lane should consume less than the roughly 4Gb/s of 
usable bandwidth each 5GT/s Gen2 lane provides.
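
A rough sketch of that budget in Python (the per-packet PCIe overhead
figure below is an assumed round number for illustration, not a measured
value for the 82599):

    # PCIe Gen2 x8: 5 GT/s per lane with 8b/10b encoding leaves roughly
    # 4 Gb/s usable per lane, ~32 Gb/s per direction across the x8 link.
    bus_gbps = 8 * 5.0 * 8 / 10

    wire_gbps = 2 * 10.0      # two 10GbE ports worth of packet data
    overhead_bytes = 64       # assumed TLP header + descriptor traffic per packet

    for pkt_bytes in (64, 256):
        needed = wire_gbps * (1 + overhead_bytes / float(pkt_bytes))
        print("%4d-byte packets: need ~%.0f Gb/s of PCIe, have ~%.0f Gb/s"
              % (pkt_bytes, needed, bus_gbps))

With 64-byte packets the overhead roughly doubles the traffic on the bus
and exceeds what the x8 Gen2 link can move, while at 256 bytes it fits with
room to spare.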

Thanks,

Alex

