Message-ID: <4B0C2547.8030408@gmail.com>
Date: Tue, 24 Nov 2009 19:26:15 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@...el.com>
CC: David Miller <davem@...emloft.net>,
"peterz@...radead.org" <peterz@...radead.org>,
"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
"yong.zhang0@...il.com" <yong.zhang0@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"arjan@...ux.jf.intel.com" <arjan@...ux.jf.intel.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH] irq: Add node_affinity CPU masks for smarter irqbalance hints
Peter P Waskiewicz Jr wrote:
>> That's the kind of thing PJ is trying to make available.
>
> Yes, that's exactly what I'm trying to do. Even further, we want to
> allocate the ring SW struct itself and descriptor structures on other
> NUMA nodes, and make sure the interrupt lines up with those allocations.
>
Say you allocate ring buffers on the NUMA node of the CPU handling the
interrupt for a particular queue.
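A minimal sketch of what such a node-local allocation could look like
(my_ring and my_setup_ring are made-up names, not from PJ's patch):

	#include <linux/slab.h>
	#include <linux/topology.h>

	struct my_ring {
		void		*desc;	/* descriptor ring */
		unsigned int	count;
	};

	/* Place the per-queue SW struct on the node of the servicing CPU. */
	static struct my_ring *my_setup_ring(int cpu, unsigned int count)
	{
		int node = cpu_to_node(cpu);
		struct my_ring *ring;

		ring = kzalloc_node(sizeof(*ring), GFP_KERNEL, node);
		if (!ring)
			return NULL;
		ring->count = count;
		return ring;
	}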
If irqbalance or an admin changes /proc/irq/{number}/smp_affinity,
do you want to reallocate the ring buffer on another NUMA node?
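For reference, this is all it takes for irqbalance or an admin to move
the IRQ out from under those allocations (the helper name is made up,
but the /proc interface is the real one); the rings do not move with it:

	#include <stdio.h>

	/* Write a hex CPU mask to /proc/irq/<n>/smp_affinity. */
	static int set_irq_affinity(int irq, unsigned long mask)
	{
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%lx\n", mask);	/* e.g. 8 = CPU 3 only */
		return fclose(f);
	}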
It seems complex to me; maybe the optimal thing would be to use a NUMA
policy to spread vmalloc() allocations across all nodes to get good
aggregate bandwidth...
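As a rough sketch of the spreading idea (alloc_queue_buf is a made-up
name; a mempolicy-based approach would be another way to do it):

	#include <linux/vmalloc.h>
	#include <linux/nodemask.h>

	/* Round-robin each queue's buffer over the online nodes so the
	 * aggregate bandwidth comes from all memory controllers. */
	static void *alloc_queue_buf(unsigned int qidx, size_t size)
	{
		unsigned int i, hops = qidx % num_online_nodes();
		int node = first_online_node;

		for (i = 0; i < hops; i++)
			node = next_online_node(node);

		return vmalloc_node(size, node);
	}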