Message-ID: <2acbd3e40809191524w346a66abp230f334009840ba7@mail.gmail.com>
Date: Fri, 19 Sep 2008 17:24:00 -0500
From: "Andy Fleming" <afleming@...il.com>
To: "Arjan van de Ven" <arjan@...radead.org>
Cc: "Matthew Wilcox" <matthew@....cx>,
"David Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: multiqueue interrupts...
On Fri, Sep 19, 2008 at 5:11 PM, Arjan van de Ven <arjan@...radead.org> wrote:
> On Fri, 19 Sep 2008 12:18:41 -0600
> Matthew Wilcox <matthew@....cx> wrote:
>> In a storage / NUMA configuration we really want to set up one queue
>> per cpu / package / node (depending on resource constraints) and know
>> that the interrupt is going to come back to the same cpu / package /
>> node. We definitely don't want irqbalanced moving the interrupt
>> around.
>
> irqbalance is NUMA-aware and places a penalty on placing an interrupt
> "wrongly". We can argue about how strong this penalty should be, but
> thinking that irqbalance doesn't use the NUMA info the kernel exposes
> is incorrect.
>
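For reference, the pinning Matthew describes can also be done from
userspace by writing a CPU mask to /proc/irq/<n>/smp_affinity for each
queue's vector. A minimal sketch (the IRQ numbers and the
one-queue-per-CPU mapping below are made up for illustration; the real
vector numbers come from /proc/interrupts, and this needs root):

#include <stdio.h>

/* Hypothetical per-queue MSI-X vectors: queue i -> irqs[i]. */
static const int irqs[] = { 48, 49, 50, 51 };

int main(void)
{
	char path[64];
	unsigned int i;

	for (i = 0; i < sizeof(irqs) / sizeof(irqs[0]); i++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/proc/irq/%d/smp_affinity", irqs[i]);
		f = fopen(path, "w");
		if (!f) {
			perror(path);
			continue;
		}
		/* smp_affinity takes a hex CPU bitmask; 1 << i pins
		 * queue i's interrupt to CPU i. */
		fprintf(f, "%x\n", 1U << i);
		fclose(f);
	}
	return 0;
}

Whether irqbalance later moves those vectors again is, as I understand
it, exactly the question in this thread.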
I'm only just now wading into this area, but I thought one of the
advantages of multiple hardware queues was that each queue is serviced
by a single CPU, so we don't have to worry about multiple CPUs touching
the same buffer ring at the same time and can drop the locking. If the
driver can't rely on the interrupt staying with the queue's CPU, don't
we lose that advantage?
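
To make the locking point concrete, here is a userspace analogue of why
single-CPU ownership of a ring needs no lock (the struct and thread
layout are purely illustrative, not any driver's actual data
structures):

#include <pthread.h>
#include <stdio.h>

#define NQUEUES 4
#define NPKTS   1000000

/* One ring per "CPU".  Because exactly one thread ever touches a given
 * ring, the head/tail updates need no lock.  If the poll loop could
 * migrate, two threads might race on the same ring and every update
 * would need a spinlock/mutex again. */
struct ring {
	unsigned long head;
	unsigned long tail;
};

static struct ring rings[NQUEUES];

static void *poll_queue(void *arg)
{
	struct ring *r = arg;
	long i;

	for (i = 0; i < NPKTS; i++) {
		r->head++;	/* "produce" a packet descriptor */
		r->tail++;	/* "consume" it: no locking needed */
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NQUEUES];
	int i;

	for (i = 0; i < NQUEUES; i++)
		pthread_create(&tid[i], NULL, poll_queue, &rings[i]);
	for (i = 0; i < NQUEUES; i++)
		pthread_join(tid[i], NULL);

	for (i = 0; i < NQUEUES; i++)
		printf("queue %d: head=%lu tail=%lu\n",
		       i, rings[i].head, rings[i].tail);
	return 0;
}

The unlocked updates above are only safe because each ring has exactly
one thread touching it; that is the property I'm worried about losing
if the interrupt can wander.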
Andy