Message-ID: <51773905.9030005@mojatatu.com>
Date: Tue, 23 Apr 2013 21:44:37 -0400
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: Stephen Hemminger <stephen@...workplumber.org>,
Willem de Bruijn <willemb@...gle.com>,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH net-next v4] rps: selective flow shedding during softnet
overflow
On 13-04-23 09:32 PM, Eric Dumazet wrote:
> On Tue, 2013-04-23 at 21:25 -0400, Jamal Hadi Salim wrote:
> Not sure what you mean. The qdisc stuff would replace the 'cpu backlog',
Aha ;->
So you would have many little backlogs, one per ring per cpu, correct?
> not be added to it. Think of having possibility to control backlog using
> standard qdiscs, like fq_codel ;)
Excellent. So this is not as big a surgery as it sounds, then.
The backloglets just need to be exposed as netdevs.
> Yes, but the per cpu backlog is shared for all devices. We probably want
> different qdisc for gre tunnel, eth0, ...
Makes sense.
BTW, looking at __skb_get_rxhash(): if I had a driver that sets
skb->rxhash (picking it off the DMA descriptor), could I not use that
instead of computing the hash? Something like the attached patch.
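
(For readers without the attachment: the snippet below is only a rough
sketch of that idea, not the attached "p1" patch itself. It assumes the
2013-era helpers, where __skb_get_rxhash() dissects the packet and
stores the result in skb->rxhash.)

	#include <linux/skbuff.h>

	/*
	 * Sketch only, not the attached patch: reuse a flow hash the
	 * driver already picked off the NIC's RX descriptor instead of
	 * recomputing it in software.
	 */
	static inline __u32 skb_get_rxhash_reuse(struct sk_buff *skb)
	{
		/* Hardware (via the driver) already supplied a hash. */
		if (skb->rxhash)
			return skb->rxhash;

		/* Otherwise fall back to the software flow dissector. */
		__skb_get_rxhash(skb);
		return skb->rxhash;
	}
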
cheers,
jamal
View attachment "p1" of type "text/plain" (412 bytes)