Date:	Wed, 20 Jun 2012 21:43:35 +0100
From:	Ben Hutchings <bhutchings@...arflare.com>
To:	Yuval Mintz <yuvalmin@...adcom.com>
CC:	<netdev@...r.kernel.org>, <davem@...emloft.net>,
	<eilong@...adcom.com>, Divy Le Ray <divy@...lsio.com>,
	Or Gerlitz <ogerlitz@...lanox.com>,
	Jon Mason <jdmason@...zu.us>,
	Anirban Chakraborty <anirban.chakraborty@...gic.com>,
	Jitendra Kalsaria <jitendra.kalsaria@...gic.com>,
	Ron Mercer <ron.mercer@...gic.com>,
	Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
	Jon Mason <mason@...i.com>,
	Andrew Gallatin <gallatin@...i.com>,
	Sathya Perla <sathya.perla@...lex.com>,
	Subbu Seetharaman <subbu.seetharaman@...lex.com>,
	Ajit Khaparde <ajit.khaparde@...lex.com>,
	Matt Carlson <mcarlson@...adcom.com>,
	Michael Chan <mchan@...adcom.com>
Subject: Re: [RFC net-next 00/14] default maximal number of RSS queues in mq
 drivers

On Tue, 2012-06-19 at 18:13 +0300, Yuval Mintz wrote:
> Different vendors support different numbers of RSS queues by default. Today,
> there exists an ethtool API through which users can change the number of
> channels their driver supports; this enables us to pursue the goal of using
> a default number of RSS queues in various multi-queue drivers.
> 
> This RFC intends to achieve the above default by upper-limiting the number
> of interrupts multi-queue drivers request (by default, not via the new API)
> in correlation with the number of CPUs on the machine.
> 
> After examining multi-queue drivers that call alloc_etherdev_mq[s],
> it became evident that most drivers allocate their devices using hard-coded
> values. Changing those defaults directly will most likely cause a regression. 
> 
> However, (most) multi-queue drivers look at the number of online CPUs when
> requesting interrupts. We assume that the number of interrupts the
> driver manages to request is propagated across the driver, and that the
> number of RSS queues it configures is based upon it.
> 
> This RFC modifies said logic - if the number of CPUs is large enough, use
> a smaller default value instead. This serves two main purposes:
>  1. A step toward uniformity in the number of RSS queues across drivers.
>  2. It prevents wasteful interrupt requests on machines with many CPUs.
[...]
> Drivers identified as multi-queue, with no reference to the number of online
> CPUs found, and thus left unhandled in this RFC:
[...]
> * sfc       efx
[...]

In sfc we currently look at the CPU topology to count cores instead of
threads.  The result is the same unless the system has hyperthreading
(or other SMT) enabled.

I've seen many diagnostic reports from customer support tickets where
there were 32 queue-sets and MSI-X vectors in use (the maximum currently
supported by the driver), but very few had a problem with that.

I would be interested in a scheme to use fewer queues for RSS but more
for flow steering (accelerated RFS, XPS and ethtool NFC).  We had some
discussion of this at last year's netconf but sadly I've not yet found
time to work on it.

Ben.

-- 
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

