Message-ID: <20080919152844.4e5e26b5@infradead.org>
Date:	Fri, 19 Sep 2008 15:28:44 -0700
From:	Arjan van de Ven <arjan@...radead.org>
To:	"Andy Fleming" <afleming@...il.com>
Cc:	"Matthew Wilcox" <matthew@....cx>,
	"David Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: multiqueue interrupts...

On Fri, 19 Sep 2008 17:24:00 -0500
"Andy Fleming" <afleming@...il.com> wrote:

> On Fri, Sep 19, 2008 at 5:11 PM, Arjan van de Ven
> <arjan@...radead.org> wrote:
> > On Fri, 19 Sep 2008 12:18:41 -0600
> > Matthew Wilcox <matthew@....cx> wrote:
> 
> >> In a storage / NUMA configuration we really want to set up one
> >> queue per cpu / package / node (depending on resource constraints)
> >> and know that the interrupt is going to come back to the same
> >> cpu / package / node. We definitely don't want irqbalanced moving
> >> the interrupt around.
> >
> > irqbalance is NUMA-aware and places a penalty on placing an
> > interrupt "wrongly". We can argue about how strong this penalty
> > should be, but thinking that irqbalance doesn't use the NUMA info
> > the kernel exposes is incorrect.
> >
> 
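
(For reference, "pinning" an interrupt by hand means writing a cpumask
to /proc/irq/<n>/smp_affinity, the same knob irqbalance adjusts. A
minimal userspace sketch; the helper name is made up and error
handling is trimmed:)

#include <stdio.h>

/* Pin IRQ `irq` to CPU `cpu` (cpu < 32 assumed) by writing a hex
 * cpumask to /proc/irq/<irq>/smp_affinity -- the same file that
 * irqbalance rewrites when it moves interrupts around. */
static int pin_irq_to_cpu(unsigned int irq, unsigned int cpu)
{
	char path[64];

	snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
	FILE *f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%x\n", 1u << cpu);	/* one-bit mask: that CPU only */
	return fclose(f);
}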
> I'm only just now wading into this area, but I thought one of the
> advantages of multiple hardware queues was that we don't have to worry
> about multiple CPUs trying to access the buffer rings at the same
> time, thus eliminating locking.  If the driver can't rely on that,
> don't we lose that advantage?
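
(A minimal sketch of the lockless pattern being described; all names
and types below are made up for illustration, not taken from any
driver. The point is that each ring has exactly one owning CPU:)

#include <stddef.h>

#define RING_SIZE  256
#define MAX_QUEUES  64

struct rx_ring {
	void   *desc[RING_SIZE];
	size_t  head;		/* advanced only by the owning CPU */
	size_t  tail;		/* advanced only by the owning CPU */
};

static struct rx_ring rings[MAX_QUEUES];	/* one ring per CPU */

static void *ring_pop(struct rx_ring *r)
{
	if (r->head == r->tail)
		return NULL;			/* ring is empty */
	void *pkt = r->desc[r->tail];
	r->tail = (r->tail + 1) % RING_SIZE;	/* single writer: no lock */
	return pkt;
}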

That's only true if you have at least as many queues as you have
logical CPUs. Ask SGI how many CPUs they'll have in three years, and
then ask your NIC vendor how many queues they have planned ;-)
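
(Concretely: once there are fewer queues than logical CPUs, some
mapping has to fold several CPUs onto one queue, and those CPUs then
share its ring. A hypothetical mapping:)

static inline unsigned int cpu_to_queue(unsigned int cpu,
					unsigned int nr_queues)
{
	/* With fewer queues than CPUs, CPU `cpu` and CPU
	 * `cpu + nr_queues` land on the same queue, and the
	 * one-owner-per-ring assumption above no longer holds. */
	return cpu % nr_queues;
}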

And a per-CPU lock isn't really all THAT expensive.
The really big advantage is that you no longer cacheline-bounce to
hell and back... and you get that either way.
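
(A sketch of that trade-off, again with made-up names: one lock per
queue, with each queue's state padded out to its own cacheline so two
queues never share a line. Taken mostly from one CPU, the lock stays
in the local cache, and the cross-CPU cacheline traffic is what you
save either way:)

#include <pthread.h>

#define CACHELINE  64
#define NR_QUEUES   8

/* Per-queue lock plus ring indices, aligned so adjacent queues
 * never false-share a cacheline. */
struct tx_queue {
	pthread_spinlock_t lock;
	unsigned int	   head;
	unsigned int	   tail;
} __attribute__((aligned(CACHELINE)));

static struct tx_queue queues[NR_QUEUES];

static void queues_init(void)
{
	for (int i = 0; i < NR_QUEUES; i++)
		pthread_spin_init(&queues[i].lock, PTHREAD_PROCESS_PRIVATE);
}

static void queue_put(struct tx_queue *q, unsigned int slot)
{
	pthread_spin_lock(&q->lock);	/* usually uncontended and local */
	q->head = slot;
	pthread_spin_unlock(&q->lock);
}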


-- 
Arjan van de Ven 	Intel Open Source Technology Centre
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org
