Message-Id: <20101130.104834.112604433.davem@davemloft.net>
Date: Tue, 30 Nov 2010 10:48:34 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: therbert@...gle.com
Cc: eric.dumazet@...il.com, netdev@...r.kernel.org,
bhutchings@...arflare.com
Subject: Re: [PATCH net-next-2.6] sched: use xps information for qdisc NUMA affinity
From: Tom Herbert <therbert@...gle.com>
Date: Tue, 30 Nov 2010 10:31:48 -0800
> On Mon, Nov 29, 2010 at 10:14 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>> I was thinking of using XPS tx_queue->cpu mapping to eventually allocate
>> memory with correct NUMA affinities, for qdisc/class stuff for example.
>>
>
> An interesting idea, but the real question is can this be used for all
> queue related allocations. This includes those that drivers allocate
> which are probably done in initialization.
Most drivers do, and all drivers ought to, allocate DMA queues and
whatnot when the interface is brought up. That solves this
particular issue.

For example, drivers/net/niu.c does this by calling
niu_alloc_channels() via niu_open().

The only thing we really can't handle currently is the netdev
itself (and the associated driver private). Jesse Brandeburg
has been reminding me about this over and over :-)

There might be some things we can even do about that part. For
example, we can put all of the things the driver touches in the
RX and TX fast paths behind indirect pointers and therefore be able
to allocate and reallocate those portions as we want long after
device registration.
Doing the core netdev struct itself is too hard because it sits
in so many tables.