Message-ID: <4A6D64D9.6010601@inria.fr>
Date: Mon, 27 Jul 2009 10:27:05 +0200
From: Brice Goglin <Brice.Goglin@...ia.fr>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: David Miller <davem@...emloft.net>, nhorman@...driver.com,
netdev@...r.kernel.org
Subject: Re: [RFC] Idea about increasing efficiency of skb allocation in network
devices

Eric Dumazet wrote:
>> Is there an easy way to get this NUMA node from the application socket
>> descriptor?
>>
>
> That's not easy; this information can change for every packet (think of
> bonding setups, with aggregation of devices on different NUMA nodes)
>
If we return a mask of CPUs near the NIC, then for a bonding setup we
could return the mask of CPUs that are close to any of the aggregated
devices. If there is no bonding, it's fine; with bonding, that behavior
looks acceptable to me.
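
For illustration, here is a minimal userspace sketch of how an application
could gather that information today, assuming the NIC is a PCI device that
exposes device/numa_node and device/local_cpulist in sysfs (not every bus
or driver does). For a bond, one would repeat this for each slave listed
in /sys/class/net/<bond>/bonding/slaves and merge the results:

/*
 * Minimal sketch: read a NIC's NUMA node and nearby CPU list from sysfs.
 * Assumes the interface is a PCI device exposing device/numa_node and
 * device/local_cpulist (not every bus or driver does).
 */
#include <stdio.h>

static int read_sysfs_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f || !fgets(buf, len, f)) {
		if (f)
			fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

int main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "eth0";
	char path[256], buf[256];

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/device/numa_node", ifname);
	if (!read_sysfs_line(path, buf, sizeof(buf)))
		printf("%s NUMA node: %s", ifname, buf);

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/device/local_cpulist", ifname);
	if (!read_sysfs_line(path, buf, sizeof(buf)))
		printf("%s nearby cpus: %s", ifname, buf);

	return 0;
}
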
> We could add a getsockopt() call to peek this information from the next
> data to be read from the socket (it would return the node id where the skb
> data is sitting, hoping that the NIC driver hadn't copybreak'ed it (i.e.
> allocated a small skb and copied the device-provided data into it before
> feeding the packet to the network stack))
>
>
>
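For illustration, something like the following is what I would expect the
application side to look like; SO_INCOMING_NODE is a purely hypothetical
name here, no such socket option exists:

/*
 * Hypothetical usage sketch only: SO_INCOMING_NODE does not exist, the
 * name and value below are placeholders.  The idea is to ask which NUMA
 * node holds the data of the next packet queued on this socket, so the
 * application can decide where to run before calling recv().
 */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NODE
#define SO_INCOMING_NODE 0	/* placeholder, not a real sockopt */
#endif

static int peek_incoming_node(int sockfd)
{
	int node = -1;
	socklen_t len = sizeof(node);

	if (getsockopt(sockfd, SOL_SOCKET, SO_INCOMING_NODE, &node, &len) < 0) {
		perror("getsockopt(SO_INCOMING_NODE)");
		return -1;
	}
	return node;	/* node id where the next skb data sits */
}
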
>> Also, one question that was raised at the Linux Symposium is: how do you
>> know which processors run the receive queue for a specific connection?
>> It would be nice to have a way to retrieve such information in the
>> application to avoid inter-node and inter-core/cache traffic.
>>
>
> All this depends on whether you have multiqueue devices or not, and
> whether traffic spreads across all the queues or not.
>
Again, on a per-connection basis, you should know whether your packets
are going through a single queue or through all of them. If they go
through a single queue, return a mask of the CPUs near that exact queue.
If they go through multiple queues (or if you don't know), just sum up
(OR together) the cpumasks of all the queues; a rough userspace sketch
of that follows below.
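
As a rough sketch of the "sum up" case, assuming the driver registers one
IRQ per RX queue with the interface name embedded in the /proc/interrupts
entry (the naming is driver-specific), an application could OR together
the smp_affinity masks of those IRQs:

/*
 * Sketch: approximate the set of CPUs servicing a NIC's queues by taking
 * the union of /proc/irq/<n>/smp_affinity for every IRQ whose name in
 * /proc/interrupts contains the interface name (driver-specific naming).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "eth0";
	unsigned long mask = 0;	/* only the first hex group of the cpumask */
	char line[1024];
	FILE *f = fopen("/proc/interrupts", "r");

	if (!f) {
		perror("/proc/interrupts");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		int irq;
		char path[64], buf[64];
		FILE *aff;

		/* lines look like " 58:  1234 ...  PCI-MSI-edge  eth0-rx-3" */
		if (!strstr(line, ifname) || sscanf(line, " %d:", &irq) != 1)
			continue;
		snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
		aff = fopen(path, "r");
		if (!aff)
			continue;
		if (fgets(buf, sizeof(buf), aff))
			mask |= strtoul(buf, NULL, 16);	/* OR the hex cpumask */
		fclose(aff);
	}
	fclose(f);
	printf("%s queue IRQs cover cpumask 0x%lx\n", ifname, mask);
	return 0;
}

On large systems smp_affinity is a comma-separated list of hex groups, so
a real tool would parse all groups rather than only the first one.
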
Brice