Date:	Mon, 22 Feb 2010 08:39:07 -0700
From:	"Tadepalli, Hari K" <hari.k.tadepalli@...el.com>
To:	Jorrit Kronjee <j.kronjee@...opact.nl>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: stress testing netfilter's hashlimit

>> Each of these is a separate machine. The sender has a Gigabit Ethernet
>> interface and sends ~410,000 packets per second (52-byte Ethernet
>> frames). The bridge has two Gigabit Ethernet interfaces, a quad core
>> Xeon X3330 and is running Ubuntu 9.10 (Karmic Koala) with kernel
>> 2.6.31-19-generic-pae.

This is a Penryn-class quad-core processor, advertised at 2.66GHz. On this platform, with PCI Express NICs, you can expect an IPv4 forwarding rate of roughly 1 Mpps per CPU core. Given the processing cost of forwarding/routing each packet, it is not possible to approach line-rate forwarding on stock kernels. Also note that 52-byte frames are below the 64-byte Ethernet minimum, and leave only ~10 bytes of payload once the Ethernet, IPv4, and UDP headers (14 + 20 + 8 bytes) are accounted for.
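
For a sense of scale, GigE wire rate at a given frame size is simple arithmetic (each frame costs an extra 20 bytes on the wire: 7B preamble + 1B SFD + 12B inter-frame gap); a quick shell sketch:

frame=52
echo $(( 1000000000 / ((frame + 20) * 8) ))    # -> 1736111 pps at 52B; ~1.488 Mpps at 64B

So your sender's ~410 kpps is well under wire rate, but still a demanding per-packet load for the bridge.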

Coming to BRIDGING: I have not worked on bridging myself, but I have seen anecdotal evidence that it costs considerably more CPU cycles per packet than routing/forwarding, and what you are observing aligns with that. You can play with a few platform-level tunings, e.g. setting the interrupt affinity of each NIC port so that each is serviced by a different core, as in:

echo 1 > /proc/irq/22/smp_affinity    # bitmask 0x1 = CPU0
echo 2 > /proc/irq/23/smp_affinity    # bitmask 0x2 = CPU1

- assuming your NIC ports are assigned IRQs 22 and 23, respectively. This lets the traffic from each NIC be handled by a different CPU core, while minimizing inter-CPU cache thrashing.
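
If you are not sure which IRQ each port was assigned, /proc/interrupts shows it (first column is the IRQ number), e.g.:

grep -E 'eth3|eth4' /proc/interrupts

Note that MSI-X NICs can expose several vectors (and thus several IRQs) per port; set the affinity of each of them the same way.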

- Hari

____________________________________
Intel/ Embedded Comms/ Chandler, AZ


-----Original Message-----
From: netdev-owner@...r.kernel.org [mailto:netdev-owner@...r.kernel.org] On Behalf Of Jorrit Kronjee
Sent: Monday, February 22, 2010 7:21 AM
To: netdev@...r.kernel.org
Subject: stress testing netfilter's hashlimit

Dear list,

I'm not entirely sure if this is the right list for this question; if
someone could give me some pointers on where to ask otherwise, it would
be most appreciated.

We're trying to stress test netfilter's hashlimit module. To do so,
we've built the following setup.

[ sender ] --> [ bridge ] --> [ receiver ]

Each of these is a separate machine. The sender has a Gigabit Ethernet
interface and sends ~410,000 packets per second (52-byte Ethernet
frames). The bridge has two Gigabit Ethernet interfaces, a quad core
Xeon X3330 and is running Ubuntu 9.10 (Karmic Koala) with kernel
2.6.31-19-generic-pae. The receiver is a nondescript machine with a
Gigabit Ethernet interface and is not really important for my question.
We disabled connection tracking (because the packets are UDP) on the
bridge as follows:

# iptables -t raw -nL
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination        
NOTRACK    all  --  0.0.0.0/0            0.0.0.0/0          

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        
NOTRACK    all  --  0.0.0.0/0            0.0.0.0/0          
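
The hashlimit rule under test sits in the FORWARD chain. For reference, a minimal rule of the kind we are exercising looks like this (parameters are illustrative, not our exact configuration):

# iptables -A FORWARD -p udp -m hashlimit --hashlimit 1000/sec \
      --hashlimit-burst 100 --hashlimit-mode srcip \
      --hashlimit-name stresstest -j ACCEPT
# iptables -A FORWARD -p udp -j DROP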

We used brctl to make a bridge between eth3 and eth4 (even though we
don't have an eth[0,1,2]):

# brctl show
bridge name    bridge id        STP enabled    interfaces
br1        8000.001517b30cb3    no        eth3
                            eth4
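
For completeness: iptables only sees bridged traffic when bridge-nf is enabled, which it is by default on this kernel:

# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1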


