Date:	Tue, 13 Apr 2010 14:53:04 +0200
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Paweł Staszewski <pstaszewski@...are.pl>
Cc:	Changli Gao <xiaosuo@...il.com>,
	Benny Amorsen <benny+usenet@...rsen.dk>,
	zhigang gong <zhigang.gong@...il.com>, netdev@...r.kernel.org
Subject: Re: Strange packet drops with heavy firewalling

On Tuesday 13 April 2010 at 14:33 +0200, Paweł Staszewski wrote:
> On 2010-04-13 01:18, Changli Gao wrote:
> > On Tue, Apr 13, 2010 at 1:06 AM, Benny Amorsen <benny+usenet@...rsen.dk> wrote:
> >    
> >>   99:         24    1306226          3          2   PCI-MSI-edge      eth1-tx-0
> >>   100:      15735    1648774          3          7   PCI-MSI-edge      eth1-tx-1
> >>   101:          8         11          9    1083022   PCI-MSI-edge      eth1-tx-2
> >>   102:          0          0          0          0   PCI-MSI-edge      eth1-tx-3
> >>   103:         18         15       6131    1095383   PCI-MSI-edge      eth1-rx-0
> >>   104:        217         32      46544    1335325   PCI-MSI-edge      eth1-rx-1
> >>   105:        154    1305595        218         16   PCI-MSI-edge      eth1-rx-2
> >>   106:         17         16       8229    1467509   PCI-MSI-edge      eth1-rx-3
> >>   107:          0          0          1          0   PCI-MSI-edge      eth1
> >>   108:          2         14         15    1003053   PCI-MSI-edge      eth0-tx-0
> >>   109:       8226    1668924        478        487   PCI-MSI-edge      eth0-tx-1
> >>   110:          3    1188874         17         12   PCI-MSI-edge      eth0-tx-2
> >>   111:          0          0          0          0   PCI-MSI-edge      eth0-tx-3
> >>   112:        203        185       5324    1015263   PCI-MSI-edge      eth0-rx-0
> >>   113:       4141    1600793        153        159   PCI-MSI-edge      eth0-rx-1
> >>   114:      16242    1210108        436       3124   PCI-MSI-edge      eth0-rx-2
> >>   115:        267       4173      19471    1321252   PCI-MSI-edge      eth0-rx-3
> >>   116:          0          1          0          0   PCI-MSI-edge      eth0
> >>
> >>
> >> irqbalanced seems to have picked CPU1 and CPU3 for all the interrupts,
> >> which to my mind should cause the same problem as before (where CPU1 and
> >> CPU3 were handling all packets). Yet the box clearly works much better
> >> than before.
> >>      
> > irqbalanced? I don't think it can work properly. Try RPS in the netdev
> > and linux-next trees, and if the CPU load isn't even, try this patch:
> > http://patchwork.ozlabs.org/patch/49915/ .
> >
> >
> >    
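For reference, RPS is controlled per receive queue through sysfs. A
minimal sketch, assuming a 4-CPU box and the rx-0 queue of eth0 (the
value is a hex CPU bitmask; "f" spreads rx-0 processing over CPUs 0-3):

	echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
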
> Yes, without irqbalance and with IRQ affinity set by hand, the router
> works much better.
> 
> But I don't think that RPS will help him. I did some tests with RPS
> and affinity; the results are in the attached file.
> The test router does traffic management (HFSC) for almost 9k users.
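
"Set by hand" means writing a CPU bitmask to each interrupt's
smp_affinity file. A sketch using the eth1-rx IRQ numbers from Benny's
listing above (masks are hex, one distinct CPU per queue):

	echo 1 > /proc/irq/103/smp_affinity	# eth1-rx-0 -> CPU0
	echo 2 > /proc/irq/104/smp_affinity	# eth1-rx-1 -> CPU1
	echo 4 > /proc/irq/105/smp_affinity	# eth1-rx-2 -> CPU2
	echo 8 > /proc/irq/106/smp_affinity	# eth1-rx-3 -> CPU3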

Thanks for sharing, Pawel.

But obviously you are mixing apples and oranges.

Are you aware that HFSC and other traffic shapers serialize access to
their data structures? If many CPUs try to access these structures in
parallel, you get a lot of cache line misses. HFSC is a real memory hog :(
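
To make the serialization concrete: a shaping setup like Pawel's hangs
every class under one root qdisc, and every enqueue from every CPU must
take that root's lock. A hypothetical fragment (class ids and rates are
invented for illustration):

	tc qdisc add dev eth0 root handle 1: hfsc default 10
	tc class add dev eth0 parent 1: classid 1:1 hfsc sc rate 900mbit ul rate 900mbit
	tc class add dev eth0 parent 1:1 classid 1:10 hfsc sc rate 2mbit ul rate 10mbit
	# ... repeated per user, ~9k times; all CPUs contend on the 1: root lock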

Benny does have firewalling (highly parallelized these days; iptables has
been much improved in this area), but no traffic control.

Anyway, Benny now has multiqueue devices, so RPS will not help him. I
suggested RPS before his move to multiqueue, and multiqueue is the most
sensible way to improve things when no central lock is used: every CPU
can really work in parallel.
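
A quick way to check that the queues really are spread out (a sketch,
using names from the listing above):

	grep 'eth1-' /proc/interrupts		# one IRQ line per tx/rx queue
	cat /proc/irq/103/smp_affinity		# hex CPU mask in effect for eth1-rx-0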



