Message-ID: <1298899971.2941.281.camel@edumazet-laptop>
Date:	Mon, 28 Feb 2011 14:32:51 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, rick.jones2@...com,
	therbert@...gle.com, wsommerfeld@...gle.com,
	daniel.baluta@...il.com, netdev@...r.kernel.org
Subject: Re: SO_REUSEPORT - can it be done in kernel?

On Monday, 28 February 2011 at 19:36 +0800, Herbert Xu wrote:
> On Sun, Feb 27, 2011 at 07:06:14PM +0800, Herbert Xu wrote:
> > I'm working on this right now.
> 
> OK I think I was definitely on the right track.  With the send
> patch made lockless I now get numbers which are even better than
> those obtained with running named with multiple sockets.  That's
> right, a single socket is now faster than what multiple sockets
> were without the patch (of course, multiple sockets may still be
> faster with the patch vs. a single socket for obvious reasons,
> but I couldn't measure any significant difference).
> 
> Also worthy of note is that prior to the patch all CPUs showed
> idleness (lazy bastards!), with the patch they're all maxed out.
> 
> In retrospect, the idleness was simply the result of the socket
> lock scheduling away and was an indication of lock contention.
> 

Now the input path can run without finding the socket locked by the
xmit path, so skbs are queued onto the receive queue instead of the
backlog.

> Here are the patches I used.  Please don't apply them yet as I
> intend to clean them up quite a bit.
> 
> But please do test them heavily, especially if you have an AMD
> NUMA machine as that's where scalability problems really show
> up.  Intel tends to be a lot more forgiving.  My last AMD machine
> blew up years ago :)

I am going to test them, thanks!

