Message-ID: <20090729104726.GA17410@hmsreliant.think-freely.org>
Date: Wed, 29 Jul 2009 06:47:26 -0400
From: Neil Horman <nhorman@...driver.com>
To: Brice Goglin <Brice.Goglin@...ia.fr>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [RFC] Idea about increasing efficiency of skb allocation in
network devices
On Wed, Jul 29, 2009 at 10:20:55AM +0200, Brice Goglin wrote:
> Neil Horman wrote:
> >>> Is there an easy way to get this NUMA node from the application socket
> >>> descriptor?
> >>>
> >> That's not easy; this information can change for every packet (think of
> >> bonding setups, with aggregation of devices on different NUMA nodes).
> >>
> >> We could add a getsockopt() call to peek this information from the next
> >> data to be read from the socket (it would return the node id where the skb
> >> data is sitting, hoping that the NIC driver hadn't copybroken it, i.e.
> >> allocated a small skb and copied the device-provided data into it before
> >> feeding the packet to the network stack).
> >>
> >>
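Just to make that idea concrete, I'd picture the userspace side of such a peek
looking roughly like the sketch below. SO_PEEK_SKB_NODE is a made-up option
name, and the semantics are only what you describe above; without kernel and
libc support this compiles down to a stub.

/* Sketch only: SO_PEEK_SKB_NODE is hypothetical, standing in for a
 * "which NUMA node holds the next skb's data" socket option. */
#include <sys/socket.h>

static int peek_next_skb_node(int sockfd)
{
#ifdef SO_PEEK_SKB_NODE
	int node = -1;
	socklen_t len = sizeof(node);

	/* Would return the NUMA node the next queued skb's data sits on,
	 * or -1 if the driver copybroke it into a locally allocated buffer. */
	if (getsockopt(sockfd, SOL_SOCKET, SO_PEEK_SKB_NODE, &node, &len) == 0)
		return node;
#else
	(void)sockfd;
#endif
	return -1;
}
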
> > Would a proc or debugfs interface perhaps be helpful here? Something that
> > showed a statistical distribution of how many packets were received by
> > each process on each irq (operating under the assumption that each rx queue
> > has its own MSI irq, giving us an easy identifier).
> >
>
> It could be interesting. But unprivileged user processes cannot read
> /proc/irq/*/smp_affinity, so they would not be able to translate your
> procfs information into a binding hint.
>
I don't think you'd need read access to the irq affinity files. If the above
debugfs/proc information were exported to indicate which NUMA node or cpu the
allocated skbs were local to, the process could use that to set its own
scheduler affinity via taskset.
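
Roughly, I'm thinking the process side would look something like the sketch
below. The proc file name and its format are invented for illustration, and
I'm binding through libnuma here rather than exec'ing taskset, but the end
result is the same.

/* Rough sketch: /proc/net/rx_skb_node is a hypothetical file standing in
 * for whatever interface we'd actually export; parse the node id from it
 * and pin the task to that node's cpus via libnuma (build with -lnuma). */
#include <stdio.h>
#include <numa.h>

static int bind_to_rx_node(void)
{
	FILE *f = fopen("/proc/net/rx_skb_node", "r");	/* hypothetical */
	int node = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%d", &node) != 1)
		node = -1;
	fclose(f);

	if (node < 0 || numa_available() < 0)
		return -1;

	/* Restrict scheduling to the cpus of the node the rx skbs live on. */
	return numa_run_on_node(node);
}
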
Neil
> Brice
>
>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html