Message-Id: <1259092394.2631.64.camel@ppwaskie-mobl2>
Date: Tue, 24 Nov 2009 11:53:14 -0800
From: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
"peterz@...radead.org" <peterz@...radead.org>,
"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
"yong.zhang0@...il.com" <yong.zhang0@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"arjan@...ux.jf.intel.com" <arjan@...ux.jf.intel.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH] irq: Add node_affinity CPU masks for smarter irqbalance hints
On Tue, 2009-11-24 at 11:01 -0800, Eric Dumazet wrote:
> Peter P Waskiewicz Jr wrote:
>
> > That's exactly what we're doing in our 10GbE driver right now (isn't
> > pushed upstream yet, still finalizing our testing). We spread to all
> > NUMA nodes in a semi-intelligent fashion when allocating our rings and
> > buffers. The last piece is ensuring the interrupts tied to the various
> > queues all route to the NUMA nodes those CPUs belong to. irqbalance
> > needs some kind of hint to make sure it does the right thing, which
> > today it does not.
>
> sk_buff allocations should be done on the node of the cpu handling rx interrupts.
Yes, but we preallocate the buffers to minimize overhead when running
our interrupt routines. Regardless, whatever queue we're filling with
those sk_buffs has an interrupt vector attached. So wherever the
descriptor ring/queue and its associated buffers were allocated, that is
where the interrupt's affinity needs to be set (roughly as in the sketch
below).
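To make that concrete, here is a rough sketch of the allocation spread I
mean. foo_ring and foo_alloc_rings are made-up names (this is not our
actual ixgbe code); the point is just that each queue remembers which
node its memory landed on so the interrupt hint can follow it:

#include <linux/errno.h>
#include <linux/nodemask.h>
#include <linux/slab.h>

struct foo_ring {
        void *desc;     /* descriptor ring / buffer bookkeeping */
        int numa_node;  /* node this queue's memory lives on    */
};

static int foo_alloc_rings(struct foo_ring **rings, int nr_queues)
{
        int node = first_online_node;
        int i;

        for (i = 0; i < nr_queues; i++) {
                /* place this queue's control state on "node" */
                rings[i] = kzalloc_node(sizeof(**rings), GFP_KERNEL, node);
                if (!rings[i])
                        return -ENOMEM;
                rings[i]->numa_node = node;

                /* round-robin to the next online node, wrapping around */
                node = next_online_node(node);
                if (node == MAX_NUMNODES)
                        node = first_online_node;
        }
        return 0;
}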
> For rings, I am OK with irqbalance and driver cooperation, in case the admin
> doesn't want to change the defaults.
>
> >
> > I don't see how this is complex though. Driver loads, allocates across
> > the NUMA nodes for optimal throughput, then writes CPU masks for the
> > NUMA nodes each interrupt belongs to. irqbalance comes along and looks
> > at the new mask "hint," and then balances that interrupt within that
> > hinted mask.
>
> So NUMA policy is given by the driver at load time?
I think it would have to. Nobody else has insight into how the driver
allocated its resources. So either the driver can be told where to
allocate (see below), or the driver needs to indicate upward how it
allocated its resources.
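As a sketch of what "indicating upward" could look like: compute the CPU
mask of the node that owns a queue and publish it as the hint for that
queue's vector. foo_hint_irq_affinity() is a made-up wrapper, and
irq_set_affinity_hint() below is just a stand-in with the same shape as
the node_affinity setter in the patch:

#include <linux/interrupt.h>
#include <linux/topology.h>

static void foo_hint_irq_affinity(unsigned int irq, int node)
{
        /* all CPUs attached to the node where the queue memory lives */
        const struct cpumask *mask = cpumask_of_node(node);

        /* advisory only: irqbalance reads the hint and balances within it */
        irq_set_affinity_hint(irq, mask);
}

irqbalance would then keep the vector somewhere inside that mask instead
of wandering it off-node.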
> An admin might choose to direct all NIC traffic to a given node, because
> its machine has a mixed workload. 3 nodes out of 4 for database workload,
> one node for network IO...
>
> So if an admin changes smp_affinity, is your driver able to reconfigure itself
> and re-allocate all its rings to be on the NUMA node chosen by the admin? This
> is what I would call complex.
No, we don't want to go down the route of reallocation. This, I agree,
is very complex and can be quite destructive. We'd basically be
resetting the driver whenever an interrupt moved, so this could become a
terrible DoS vulnerability.
Jesse Brandeburg has a set of patches he's working on that will allow us
to bind an interface to a single node. So in your example of 3 nodes
for the DB workload and 1 for network I/O, the driver can be loaded and
bound directly to that 4th node. The driver would then set the
node_affinity mask to the CPU mask of that single node. But in these
deployments, a sysadmin changing affinity in a way that flies directly
in the face of how resources are laid out is poor system administration.
I know it will happen, but I don't know how far we need to go to protect
sysadmins from shooting themselves in the foot in terms of performance
tuning.
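A very rough sketch of that single-node case (the foo_node module
parameter and foo_bind_irqs() are hypothetical, not Jesse's actual
patches, and the same stand-in hint call as above is used; this just
shows the shape):

#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/nodemask.h>
#include <linux/numa.h>
#include <linux/topology.h>

static int foo_node = NUMA_NO_NODE;
module_param(foo_node, int, 0444);
MODULE_PARM_DESC(foo_node, "NUMA node to bind all queues and vectors to");

static void foo_bind_irqs(unsigned int *irqs, int nr_irqs)
{
        int node = (foo_node == NUMA_NO_NODE) ? first_online_node : foo_node;
        int i;

        /* every vector gets the same hint: the CPUs of the bound node */
        for (i = 0; i < nr_irqs; i++)
                irq_set_affinity_hint(irqs[i], cpumask_of_node(node));
}

With that, irqbalance keeps every vector on the admin-chosen node, so
the memory and the interrupts stay together without any reallocation.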
Cheers,
-PJ