Date:	Fri, 19 Sep 2008 12:18:41 -0600
From:	Matthew Wilcox <matthew@....cx>
To:	David Miller <davem@...emloft.net>
Cc:	netdev@...r.kernel.org, Arjan van de Ven <arjan@...radead.org>
Subject: Re: multiqueue interrupts...

On Thu, Sep 18, 2008 at 7:38 PM, David Miller wrote:
> During kernel summit I was speaking with Arjan van de Ven
> about irqbalanced and networking card multiqueue interrupts.
> 
> In order for irqbalanced to make smart decisions, what needs to
> happen in drivers is that the individual interrupts need to be
> named in such a way that it can tell, just by looking at the
> /proc/interrupts output, that these interrupts are related.
> 
> So on a multiqueue card with 2 RX queues and 2 TX queues we'd
> have names like:
> 
>        eth0-rx-0
>        eth0-rx-1
>        eth0-tx-0
>        eth0-tx-1
> 
> So let's make an effort to get this done right in 2.6.28 and meanwhile
> Arjan can add the irqbalanced code.

Instead of having magic names, how about we put something in
/proc/irq/nnn/ that lets us tell which interrupts are connected to which
queues?
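
Purely as a sketch (this file doesn't exist today; the name and the
format are invented), I'm imagining something like:

	$ cat /proc/irq/42/queue
	eth0 rx 1

Then irqbalance can group related vectors without parsing magic out of
the interrupt names.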

Another idea I've been thinking about is a flag that tells irqbalance
to leave particular interrupts alone, so we can set them up right the
first time and know they'll stay put.
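
A minimal sketch of that, assuming we can reuse the existing
IRQF_NOBALANCING flag (which, if I remember right, already makes writes
to /proc/irq/N/smp_affinity fail with -EIO):

	/*
	 * Pin this vector: the driver sets the affinity itself, and
	 * irqbalance's smp_affinity writes bounce off.
	 */
	err = request_irq(q->irq, rx_handler, IRQF_NOBALANCING,
			  q->name, q);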

We were discussing various options around multiqueue, first at the SCSI
multiqueue BoF and later at the PCI MSI BoF.  There's a general feeling
that drivers should be given some guidance about how many queues they
should be enabling, and that the sysadmin needs to be the one telling
the PCI layer, which drivers should then query.  The use cases vary
wildly depending on whether you're doing routing or are an end node,
whether you're doing virtualisation or NUMA or both, and on just how
many cards and CPUs you have.
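
Roughly what I have in mind, purely hypothetical (the helper name below
is made up; the point is that the policy lives in the PCI core and the
driver just asks):

	/* hypothetical: returns the admin's suggested queue count */
	int nq = pci_suggested_queue_count(adapter->pdev);

	nq = min(nq, adapter->max_hw_queues);	/* clamp to what the hw has */
	for (i = 0; i < nq; i++)
		err = setup_queue(adapter, i);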

In a storage / NUMA configuration we really want to set up one queue per
CPU / package / node (depending on resource constraints) and know that
the interrupt is going to come back to the same CPU / package / node.
We definitely don't want irqbalanced moving the interrupt around.
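
And for the cases where we do know the right answer up front, pinning
from userspace is already just a write to the standard proc file.  A
self-contained sketch (assumes <= 32 CPUs so a single hex word is
enough for the mask):

	#include <stdio.h>

	/* bind an irq to one cpu via /proc/irq/<irq>/smp_affinity */
	static int pin_irq(int irq, int cpu)
	{
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%x\n", 1u << cpu);
		return fclose(f);
	}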

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."