Message-ID: <1298675700.14113.213.camel@tardy>
Date:	Fri, 25 Feb 2011 15:15:00 -0800
From:	Rick Jones <rick.jones2@...com>
To:	Thomas Graf <tgraf@...radead.org>
Cc:	Tom Herbert <therbert@...gle.com>,
	Bill Sommerfeld <wsommerfeld@...gle.com>,
	Daniel Baluta <daniel.baluta@...il.com>, netdev@...r.kernel.org
Subject: Re: SO_REUSEPORT - can it be done in kernel?

On Fri, 2011-02-25 at 17:48 -0500, Thomas Graf wrote:
> On Fri, Feb 25, 2011 at 11:18:15AM -0800, Rick Jones wrote:
> > I think the idea is goodness, but will ask, was the (first) bottleneck
> > actually in the kernel, or was it in bind itself?  I've seen
> > single-instance, single-byte burst-mode netperf TCP_RR do in excess of
> > 300K transactions per second (with TCP_NODELAY set) on an X5560 core.
> > 
> > ftp://ftp.netperf.org/netperf/misc/dl380g6_X5560_rhel54_ad386_cxgb3_1.4.1.2_b2b_to_same_agg_1500mtu_20100513-2.csv
> > 
> > and that was with now ancient RHEL5.4 bits...  yes, there is a bit of
> > apples, oranges and kumquats but still, I am wondering if this didn't
> > also "work around" some internal BIND scaling issues as well.
> 
> Yes it is. We have observed two separate bottlenecks.
> 
> The first we have discovered is within BIND. As soon as more than one
> worker thread is being used, strace showed a ton of futex() system
> calls to the kernel once the number of queries crossed a magic
> barrier. This suggested heavy lock contention within BIND.
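
(As an aside, that sort of futex storm is easy to quantify with strace's
syscall summary mode; something along the lines of

    strace -c -f -p <pid of named>

attached while the load is applied will tally the calls per syscall: -c
prints the count summary, -f follows the worker threads, and the pid is
just a placeholder for whatever the BIND process is running under.)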

The more things change, the more they remain the same, or perhaps "code
may come and go, but lock contention is forever":

ftp://ftp.cup.hp.com/dist/networking/briefs/bind9_perf.txt
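
(And for reference, the single-byte burst-mode TCP_RR numbers quoted
above would have come from an invocation shaped roughly like

    netperf -H <remote host> -t TCP_RR -- -r 1,1 -D -b 32

where -r 1,1 asks for one-byte requests and responses, -D sets
TCP_NODELAY, and -b sets the burst size; the burst option is only
compiled in when netperf is configured with --enable-burst, and exact
option spellings drift a little between netperf versions.)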

rick jones

The system ftp.cup.hp.com is probably going away before long; I will
probably put its collection of ancient writeups somewhere on netperf.org.

> 
> This BIND lock contention was not visible on all systems having scalability
> issues though. Some machines were not able to deliver enough queries to
> BIND in order for the lock contention to appear.
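
PS - for anyone coming to the thread cold, the SO_REUSEPORT scheme in the
subject line boils down to each worker creating and binding its own socket
to the same address and port with the option set, and letting the kernel
spread the incoming queries across those sockets.  A rough sketch, assuming
a kernel and libc that actually expose the proposed SO_REUSEPORT option
(the port number and error handling here are just placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Each worker calls this once and then recvfrom()s on its own fd. */
static int make_worker_socket(void)
{
	int one = 1;
	struct sockaddr_in sin;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return -1;

	/* SO_REUSEPORT allows several sockets to bind the identical
	 * address/port pair; the kernel then distributes incoming
	 * datagrams among them. */
	if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
		close(fd);
		return -1;
	}

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_addr.s_addr = htonl(INADDR_ANY);
	sin.sin_port = htons(53);	/* placeholder port */

	if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

Whether the kernel's distribution across those sockets beats one socket
shared by N threads is exactly the question the thread is chewing on.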

