Message-ID: <adamyfib173.fsf@cisco.com>
Date:	Sat, 29 Nov 2008 14:46:08 -0800
From:	Roland Dreier <rdreier@...co.com>
To:	Ben Hutchings <bhutchings@...arflare.com>
Cc:	Raz <raziebe@...il.com>, netdev@...r.kernel.org
Subject: Re: what is the expected performance from a dual port 10G card?

 > The 8b10 encoding is already accounted for in the 2.5 Gbps figure;
 > the raw bit rate is 3.125 Gbps.

No, this is absolutely incorrect for PCI Express.  PCI Express (1.0) raw
signaling is 2.5 gigatransfers/sec per lane, and the overhead of 8b/10b
encoding reduces the data throughput to 2.0 gigabits/sec per lane.  (You
may be thinking of the 10 Gigabit Ethernet XAUI standard, which uses
8b/10b encoding on 4 lanes running at 3.125 GT/sec to get 10 Gb/sec of
data.)
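
Back of the envelope (this is just the arithmetic above redone in a few
lines of C, assuming an x8 PCIe 1.0 link and 4-lane XAUI):

/* Redo the 8b/10b arithmetic: 8 data bits are carried in 10 line bits. */
#include <stdio.h>

int main(void)
{
	const double encoding = 8.0 / 10.0;	/* 8b/10b efficiency */

	/* PCIe 1.0: 2.5 GT/s per lane, x8 link assumed */
	printf("PCIe 1.0 x8: %.1f Gb/s of data\n", 2.5 * encoding * 8);
	/* XAUI: 3.125 GT/s per lane, 4 lanes */
	printf("XAUI x4:     %.1f Gb/s of data\n", 3.125 * encoding * 4);
	return 0;
}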

In addition, PCI Express transfers are broken up into packets, usually
with very small payloads (128 or 256 bytes are common).  So the per-packet
header overhead reduces throughput further, and the data link layer (ACKs
and flow control) adds more on top.  Then transferring NIC control
structures over the link adds even more overhead.  So achieving
13.something Gb/sec of real throughput on a PCIe link theoretically
capable of 16 Gb/sec seems pretty good to me.
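
To put rough numbers on that packet overhead (the ~24 bytes of per-TLP
framing, sequence number, header and LCRC below is only an approximation,
and DLLPs, flow control and descriptor fetches are ignored):

/* Payload efficiency with 256-byte TLPs; the 24-byte per-TLP overhead
 * figure is an approximation, not taken from the spec. */
#include <stdio.h>

int main(void)
{
	const double link_gbps = 16.0;	/* x8 PCIe 1.0 after 8b/10b */
	const int payload = 256;	/* bytes of data per TLP */
	const int overhead = 24;	/* approx. framing + header + CRC per TLP */
	double eff = (double)payload / (payload + overhead);

	printf("best case: ~%.1f Gb/s (%.0f%% efficient), before DLLPs\n"
	       "and descriptor traffic\n", link_gbps * eff, eff * 100.0);
	return 0;
}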

There are motherboards with PCIe 2.0 slots running at 5.0 GT/sec
available (i.e. 32 Gb/sec of data throughput on an x8 link), and I know
the Mellanox ConnectX NIC at least is capable of that as well, so you
might be able to get better performance on such a system.  However, you
mention that most of the NICs top out at 10 Gb/sec in your testing while
the Intel NIC goes higher, so most likely you are hitting driver or
software limits rather than bus limits.  One common issue is CPU
affinity; you might want to play around with setting interrupt affinity
and/or pinning processes to a single CPU.  It might also be interesting
to profile where your CPU time is going.
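
For what it's worth, here is a minimal sketch of pinning a process to
CPU 0 with sched_setaffinity(); interrupt affinity is set separately,
e.g. by writing a CPU mask to /proc/irq/<N>/smp_affinity:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);		/* run only on CPU 0 */
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		return 1;
	}
	/* ... run the benchmark workload here ... */
	return 0;
}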

 - R.