Date:	Sat, 23 May 2009 17:35:25 +0200
From:	Jarek Poplawski <jarkao2@...il.com>
To:	Vladimir Ivashchenko <hazard@...ncoudi.com>
Cc:	Eric Dumazet <dada1@...mosbay.com>, netdev@...r.kernel.org
Subject: Re: HTB accuracy for high speed (and bonding)

On Sat, May 23, 2009 at 06:06:30PM +0300, Vladimir Ivashchenko wrote:
> > > So, I got rid of bonding completely and instead configured PBR on the
> > > Cisco + Linux routing in such a way that packets get received and
> > > transmitted on NICs connected to the same pair of cores with a common
> > > cache. 65-70% idle on all cores now, compared to 0-30% idle in the
> > > worst-case scenarios before.
> > 
> > As a matter of fact, I don't understand this bonding idea vs. SMP: I
> > guess Eric Dumazet wrote about why it's wrong wrt. locking. I'm not an
> > SMP expert, but I think the most efficient use is separate NICs per
> > CPU (so with separate HTB qdiscs if possible), or multiqueue NICs -
> 
> I tried the following scenario: 2 NICs used for receive + another 2 NICs
> used for transmit, running HTB. Each NIC on a separate core. No bonding,
> just manual load balancing using IP routing.
> 
> The result was that RX cores would be 20% and 40% idle respectively, even 
> though the amount of traffic they were receiving was roughly the same. 
> The TX cores were idling at around 90%. 

There is not enough data to analyse this, but generally you should aim
to keep each flow (RX + TX) on CPUs that share a cache.
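
As a rough illustration (the interface names, IRQ numbers and rates
below are made up, and irqbalance may need to be stopped first so it
doesn't override the masks), pinning the RX and TX NICs of one flow to
two cores that share a cache looks something like:

  # find the IRQ numbers of the NICs
  grep eth /proc/interrupts

  # pin eth0 (RX, IRQ 24 here) to CPU0 and eth2 (TX, IRQ 26 here) to
  # CPU1, assuming CPU0 and CPU1 share a cache on this box
  echo 1 > /proc/irq/24/smp_affinity    # mask 0x1 = CPU0
  echo 2 > /proc/irq/26/smp_affinity    # mask 0x2 = CPU1

and each TX NIC then gets its own independent HTB tree, e.g.:

  tc qdisc add dev eth2 root handle 1: htb default 10
  tc class add dev eth2 parent 1: classid 1:10 htb rate 500mbit ceil 900mbit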

Jarek P.
