 
Date:	Wed, 18 Mar 2009 14:51:16 -0700
From:	Vernon Mauery <vernux@...ibm.com>
To:	Eilon Greenstein <eilong@...adcom.com>
CC:	Andi Kleen <andi@...stfloor.org>, netdev <netdev@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	rt-users <linux-rt-users@...r.kernel.org>
Subject: Re: High contention on the sk_buff_head.lock

Eilon Greenstein wrote:
> On Wed, 2009-03-18 at 14:07 -0700, Vernon Mauery wrote:
>>> The real "fix" would probably be to use a multi-queue capable NIC
>>> and a NIC driver that sets up multiple queues for TX (normally they
>>> only do so for RX). Then each core, or set of cores (often the number
>>> of cores is larger than the number of NIC queues), could avoid this
>>> problem. Disadvantage: more memory use.
>> Hmmm.  So does either the netxen_nic or bnx2x driver support multiple
>> queues?  (That is the HW that I have access to right now.)  And do I
>> need to do anything to set them up?
>>
> The version of bnx2x in net-next supports multiple Tx queues (and Rx).
> It will open an equal number of Tx and Rx queues, up to the smaller of
> 16 or the number of cores in the system. You can validate that all
> queues are transmitting with "ethtool -S", which has per-queue
> statistics in that version.

Thanks.  I will test how this affects the lock contention the next time
the Broadcom hardware is available.

--Vernon
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
