Date:	Fri, 19 Sep 2008 13:57:58 -0700
From:	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>
To:	"Matthew Wilcox" <matthew@....cx>,
	"David Miller" <davem@...emloft.net>
Cc:	<netdev@...r.kernel.org>, "Arjan van de Ven" <arjan@...radead.org>,
	"Brice Goglin" <brice@...i.com>,
	"Ben Hutchings" <bhutchings@...arflare.com>
Subject: RE: multiqueue interrupts...

Matthew Wilcox wrote:
>> So let's make an effort to get this done right in 2.6.28 and
>> meanwhile Arjan can add the irqbalanced code.
 
> Another idea I've been thinking about is a flag to tell irqbalance to
> leave stuff alone, and we just set stuff up right the first time.

There is already a flag for this, IRQF_NOBALANCING; at least I think
that's what we want.  How irqbalanced treats interrupts marked with it
is another matter.
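
For reference, here is roughly what I mean at the driver level.  A
minimal sketch only; "my_adapter", "my_ring_handler" and friends are
made-up names, not from any real driver:

#include <linux/interrupt.h>

/* Register one MSI-X vector per ring with IRQF_NOBALANCING so the
 * interrupt is excluded from irq balancing.  Everything besides the
 * flag and request_irq() itself is invented for illustration. */
static int my_setup_ring_irq(struct my_adapter *adapter, int i)
{
	return request_irq(adapter->msix_entries[i].vector,
			   my_ring_handler, IRQF_NOBALANCING,
			   adapter->rings[i].name, &adapter->rings[i]);
}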
 
> We were discussing various options around multiqueue at first the scsi
> multiqueue BOF and later at the PCI MSI BOF.  There's a general
> feeling that drivers should be given some guidance about how many
> queues they should be enabling, and the sysadmin needs to be the one
> telling the PCI layer, which drivers should then query.  The use
> cases vary wildly depending whether you're doing routing or are an
> end node, whether you're doing v12n or NUMA or both and on just how
> many cards and cpus you have.

Not a bad idea, but I can appreciate why DaveM thinks this is
unnecessary.  However, all we are left with right now is code changes
or module parameters when trying to configure the number of queues.
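
By "module parameters" I mean the usual load-time knob, something like
the sketch below (the parameter name is made up; every driver invents
its own):

#include <linux/module.h>

/* The status quo: a knob fixed at load time, changeable only by
 * reloading the module.  "num_queues" is a hypothetical name. */
static unsigned int num_queues = 1;
module_param(num_queues, uint, 0444);
MODULE_PARM_DESC(num_queues, "Number of tx/rx queue pairs to allocate");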

How about some new ethtool options for multiqueue configuration?  I
haven't spent much time thinking about this, but here is a rough
proposal.

query multiqueue capabilities:
ethtool -q ethX

set multiqueue capabilities:
ethtool -Q ethX tx N rx N int <fixedcpu|pairs|somethingelse?>

tx N and rx N are pretty self-explanatory.
int fixedcpu - each queue gets a cpu and its vector is registered with
IRQF_NOBALANCING
int pairs - tx and rx queues are allocated per cpu and (probably) share
a vector
There could be other modes here; I'm not sure how (or if) we would want
to make this pluggable within ethtool's design without opening up
buffer-overflow kinds of holes.
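
On the kernel side this could map to a new ethtool ioctl.  A rough
sketch of what that might look like; the command numbers, constants,
and struct layout below are all invented for illustration, nothing
here exists yet:

#include <linux/types.h>

/* Hypothetical commands for the proposed -q/-Q options. */
#define ETHTOOL_GQUEUES		0x00000040	/* get queue config */
#define ETHTOOL_SQUEUES		0x00000041	/* set queue config */

/* Hypothetical interrupt binding modes for the "int" option. */
#define ETH_MQ_INT_FIXEDCPU	1	/* one pinned vector per queue */
#define ETH_MQ_INT_PAIRS	2	/* tx/rx pair per cpu, shared vector */

struct ethtool_queues {
	__u32	cmd;		/* ETHTOOL_GQUEUES or ETHTOOL_SQUEUES */
	__u32	max_tx;		/* read-only: hardware limits */
	__u32	max_rx;
	__u32	tx_count;	/* current or requested queue counts */
	__u32	rx_count;
	__u32	int_mode;	/* ETH_MQ_INT_* */
};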
 
> In a storage / NUMA configuration we really want to set up one queue
> per cpu / package / node (depending on resource constraints) and know
> that the interrupt is going to come back to the same cpu / package /
> node. We definitely don't want irqbalanced moving the interrupt
> around. 

ethtool doesn't help storage, but I read this on netdev anyway...

Jesse