Message-ID: <20090319071517.GA7389@ioremap.net>
Date:	Thu, 19 Mar 2009 10:15:18 +0300
From:	Evgeniy Polyakov <zbr@...emap.net>
To:	Gregory Haskins <ghaskins@...ell.com>
Cc:	David Miller <davem@...emloft.net>, vernux@...ibm.com,
	andi@...stfloor.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
	Patrick Mullaney <pmullaney@...ell.com>
Subject: Re: High contention on the sk_buff_head.lock

On Wed, Mar 18, 2009 at 05:54:04PM -0400, Gregory Haskins (ghaskins@...ell.com) wrote:
> Note that -rt doesn't typically context-switch under contention anymore
> since we introduced adaptive locks.  Also note that the contention
> against the lock is still contention, regardless of whether you have -rt
> or not.  It's just that the slow path to handle the contended case for
> -rt is more expensive than in mainline.  However, once you have the
> contention as stated, you have already lost.
> 
> We have observed the poster's findings ourselves in both mainline and
> -rt, i.e. that lock doesn't scale very well once you have more than a
> handful of cores.  It's certainly a great area to look at for improving
> the overall stack, IMO, as I believe there is quite a bit of headroom
> left to be recovered that is buried there.
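
For context, the pattern being described is every CPU's transmit path
serializing on one spinlock around a shared queue head.  A rough
user-space model of that is sketched below; all names are made up, and
it only illustrates the contention pattern, it is not the kernel's
qdisc code.

/*
 * Toy model of the sk_buff_head.lock pattern: every "CPU" (thread)
 * funnels its packets through a single spinlock-protected queue head.
 * All names here are invented for illustration.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 8
#define NPKTS    100000

struct fake_skb {
	struct fake_skb *next;
	int len;
};

static struct {
	pthread_spinlock_t lock;	/* the one lock everybody hits */
	struct fake_skb *head;
	unsigned long enqueued;
} txq;

static void *xmit_worker(void *unused)
{
	long i;

	(void)unused;
	for (i = 0; i < NPKTS; i++) {
		struct fake_skb *skb = malloc(sizeof(*skb));

		skb->len = 64;
		/* Every thread takes the same lock for every packet. */
		pthread_spin_lock(&txq.lock);
		skb->next = txq.head;
		txq.head = skb;
		txq.enqueued++;
		pthread_spin_unlock(&txq.lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	pthread_spin_init(&txq.lock, PTHREAD_PROCESS_PRIVATE);
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, xmit_worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	/* The queue is never drained; the point is the lock, not the data. */
	printf("enqueued %lu skbs through one lock\n", txq.enqueued);
	return 0;
}

With more cores, more of the time goes into fighting over txq.lock and
less into useful work, which is the scaling problem described above.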

Something tells me that you observed skb head lock contention because
you stressed the network while everything else on that machine sat
idle.  What if you start an IO stress load instead: will __queue_lock
contention be of the same order of magnitude?  Or run as many processes
as there are skbs, each racing for the scheduler: will the run-queue
lock show up in the stats?

I believe the answer is yes to all of those questions.
You stressed one subsystem and it showed up in the statistics.
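
An easy way to check is to run each stress load with lock statistics
enabled (CONFIG_LOCK_STAT) and compare the contention counters for the
locks in question in /proc/lock_stat.  Something like the sketch below,
or plain grep on that file, is enough; the lock names are only
examples.

/*
 * Tiny helper: filter /proc/lock_stat (needs CONFIG_LOCK_STAT) for the
 * lock class you care about, e.g. "sk_buff", "rq->lock" or
 * "queue_lock".  Purely a convenience sketch; grep does the same job.
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	FILE *f;
	char line[1024];
	const char *pat = argc > 1 ? argv[1] : "sk_buff";

	f = fopen("/proc/lock_stat", "r");
	if (!f) {
		perror("fopen /proc/lock_stat");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strstr(line, pat))
			fputs(line, stdout);
	fclose(f);
	return 0;
}

Clearing the counters between runs (echo 0 > /proc/lock_stat) makes the
comparison cleaner.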

-- 
	Evgeniy Polyakov
