Message-ID: <4B0C2CCA.6030006@gmail.com>
Date: Tue, 24 Nov 2009 19:58:18 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Miller <davem@...emloft.net>
CC: peter.p.waskiewicz.jr@...el.com, peterz@...radead.org,
arjan@...ux.intel.com, yong.zhang0@...il.com,
linux-kernel@...r.kernel.org, arjan@...ux.jf.intel.com,
netdev@...r.kernel.org
Subject: Re: [PATCH] irq: Add node_affinity CPU masks for smarter irqbalance hints

David Miller wrote:
> From: Eric Dumazet <eric.dumazet@...il.com>
> Date: Tue, 24 Nov 2009 19:26:15 +0100
>
>> It seems complex to me, maybe optimal thing would be to use a NUMA policy to
>> spread vmalloc() allocations to all nodes to get a good bandwidth...
>
> vmalloc() and sk_buff's don't currently mix and I really don't see us
> ever allowing them to :-)

I think Peter was referring to tx/rx ring buffers, not sk_buffs.
They (the ring buffers) are allocated with vmalloc() at driver init time.
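
As a rough sketch only (not something from the patch; "my_desc", "count" and
"queue_cpu" are made-up names for illustration), a driver could pin a queue's
descriptor ring to a chosen node with vmalloc_node() instead of plain vmalloc():

	#include <linux/vmalloc.h>
	#include <linux/topology.h>

	/* place the descriptor ring on the node of the CPU that owns this queue */
	struct my_desc *ring = vmalloc_node(count * sizeof(struct my_desc),
					    cpu_to_node(queue_cpu));
	if (!ring)
		return -ENOMEM;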
And Tom pointed out that our rx sk_buff allocation should use the node of the
requester, so there is no need to hardcode a node number per rx queue (or per
device, as we do today).
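
One way to read that (again only a sketch, assuming the rx refill runs on the
CPU that will consume the data; "rx_buf_len" is a placeholder) is to allocate
on the local node instead of a node stored per queue or per device:

	#include <linux/skbuff.h>
	#include <linux/topology.h>
	#include <linux/gfp.h>

	/* allocate the rx skb on the node of the CPU doing the refill,
	 * rather than a node recorded per rx queue / per device
	 */
	struct sk_buff *skb = __alloc_skb(rx_buf_len, GFP_ATOMIC, 0,
					  numa_node_id());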