Message-ID: <CAL8zT=hi9_Y4oGw=cVSnYE=km6MZBAAie-A5RWLy=47FR8aTag@mail.gmail.com>
Date: Wed, 11 Jul 2012 15:41:55 +0200
From: Jean-Michel Hautbois <jhautbois@...il.com>
To: Merav Sicron <meravs@...adcom.com>
Cc: netdev <netdev@...r.kernel.org>
Subject: Re: UDP ordering when using multiple rx queue

2012/7/11 Jean-Michel Hautbois <jhautbois@...il.com>:
> 2012/7/11 Merav Sicron <meravs@...adcom.com>:
>> On Wed, 2012-07-11 at 00:53 -0700, Jean-Michel Hautbois wrote:
>>
>>> Several tests lead to a simple conclusion: when the NIC has only one
>>> RX queue everything is fine (be2net, for instance), but when it has
>>> more than one RX queue I can see "lost packets".
>>> This is the case for bnx2x or mlx4, for instance.
>>
>> From what you describe I assume that you use a different source IP /
>> destination IP in each packet - is this something that you can
>> control? Because with the same IP addresses the traffic will be
>> steered to the same queue.
>
> OK, sorry for not having explained that: the packets are multicast,
> with one port per stream. Sending a single multicast stream on a
> bnx2x-based NIC can spread it across several queues (two, from what I
> can see), which leads to the problem reported.
>
>>> Here are my questions:
>>> - Is it possible to force a driver to use only one RX queue, even if
>>> it can use more, without reloading the driver (which is feasible
>>> only when a module parameter exists for that)?
>>
>> You can reduce the number of queues using "ethtool -L ethX combined 1".
>> Note however that it will cause an automatic driver unload/load.
>
> OK, thanks for this tip :).
>
> JM

I confirm that using "ethtool -L eth1 combined 1" solves my issue. I can
receive 3 Gbps across 5 multicast streams on 5 ports without any "packet
loss" (again, as seen by my application), and it uses one RX queue only
(of course :)).
One multicast stream (one port) with the default combined=8 still splits
across two RX queues...
Unicast traffic seems fine (I used netperf to check this assumption).

JM
--
To unsubscribe from this list: send the line "unsubscribe netdev"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
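
The behaviour in this thread is standard RSS (receive-side scaling): the
NIC hashes header fields of each incoming packet to pick an RX queue, and
Linux gives no ordering guarantee across queues, so packets of one UDP
stream that land on different queues can reach the application out of
order and look like loss. Below is a sketch of the diagnosis and the
workaround discussed above; the interface name eth1 is illustrative, and
support for querying or changing the RSS hash fields varies by driver.

  # How many RX/combined channels is the driver using right now?
  ethtool -l eth1

  # Which header fields does the RSS hash use for UDP over IPv4?
  # "sdfn" means src/dst IP plus src/dst port are all hashed.
  ethtool -n eth1 rx-flow-hash udp4

  # Workaround from this thread: force a single combined channel so all
  # packets flow through one RX queue (on bnx2x this triggers an
  # automatic driver unload/load).
  ethtool -L eth1 combined 1

  # Alternative, where the driver supports it: keep multiple queues but
  # hash UDP on the IP pair only ("sd"), so packets for a given
  # multicast group always map to the same queue.
  ethtool -N eth1 rx-flow-hash udp4 sd

  # Sanity check: per-queue counters show where traffic actually lands.
  ethtool -S eth1 | grep -i rx

The single-queue workaround trades parallelism for ordering; the
rx-flow-hash alternative keeps the other queues available for unrelated
flows, but whether a given 2012-era driver honours it is not confirmed
by this thread.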