Message-ID: <1366767151.8964.56.camel@edumazet-glaptop>
Date: Tue, 23 Apr 2013 18:32:31 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jamal Hadi Salim <jhs@...atatu.com>
Cc: Stephen Hemminger <stephen@...workplumber.org>,
Willem de Bruijn <willemb@...gle.com>,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH net-next v4] rps: selective flow shedding during softnet
overflow
On Tue, 2013-04-23 at 21:25 -0400, Jamal Hadi Salim wrote:
> On 13-04-23 08:09 PM, Eric Dumazet wrote:
>
> > qdisc code has a hardcoded dev_hard_start_xmit() call, that's why the
> > ifb hack is used. Not to mention device flow control.
> >
> > It might be possible to use a q->xmit() method instead, so that it can
> > be used on ingress without ifb.
> >
>
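To illustrate the q->xmit() idea, here is a rough userspace sketch (types
and names are made up for illustration, this is not the kernel code path):

/* Sketch only: the dequeue/restart loop calls a per-qdisc xmit hook
 * instead of a hardcoded dev_hard_start_xmit(), so the same machinery
 * could deliver packets up the stack on ingress without going through ifb.
 */
struct pkt_model;			/* stand-in for struct sk_buff */

struct qdisc_model {
	struct pkt_model *(*dequeue)(struct qdisc_model *q);
	int (*xmit)(struct qdisc_model *q, struct pkt_model *p);
	void *priv;
};

/* Simplified restart loop: on egress, xmit would wrap the driver transmit;
 * on ingress, it would hand the packet to the upper layers instead.
 */
void qdisc_run_model(struct qdisc_model *q)
{
	struct pkt_model *p;

	while ((p = q->dequeue(q)) != NULL)
		q->xmit(q, p);		/* was: hardcoded dev_hard_start_xmit() */
}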
> If I understood correctly what you are trying to achieve:
> I don't think one qdisc per rx queue/ring will work well with the
> current qdisc code, since the qdisc is attached per netdev.
MQ permits one qdisc per TX queue.
It would be the same concept on ingress.
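For reference, a minimal userspace model of that layout (not the actual
sch_mq.c code, all names are illustrative): one queue structure with its
own lock per hardware queue, so enqueues on different queues never contend:

#include <pthread.h>

struct pkt {
	struct pkt *next;
	/* payload omitted */
};

struct per_queue_qdisc {
	pthread_mutex_t lock;		/* private lock, no cross-queue contention */
	struct pkt *head, *tail;
	unsigned int len;
	unsigned int limit;
};

struct mq_like_dev {
	unsigned int nqueues;
	struct per_queue_qdisc *q;	/* one qdisc per TX (or RX) queue */
};

/* Enqueue only takes the lock of its own queue; other queues are untouched. */
int mq_like_enqueue(struct mq_like_dev *dev, unsigned int queue, struct pkt *p)
{
	struct per_queue_qdisc *q = &dev->q[queue % dev->nqueues];
	int ret = 0;

	pthread_mutex_lock(&q->lock);
	if (q->len >= q->limit) {
		ret = -1;			/* queue full: drop */
	} else {
		p->next = NULL;
		if (q->tail)
			q->tail->next = p;
		else
			q->head = p;
		q->tail = p;
		q->len++;
	}
	pthread_mutex_unlock(&q->lock);
	return ret;
}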
> i.e. when packets are fanned out across cpu backlogs, as long as they
> came in via the same netdev queue, they are going to share the same
> lock with all the other cpus those packets have been fanned out to,
> the moment you attach an ingress qdisc to that netdev ring/queue.
>
Not sure what you mean. The qdisc stuff would replace the 'cpu backlog',
not be added to it. Think of being able to control the backlog using
standard qdiscs, like fq_codel ;)
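In other words (a hedged outline only, every name below is made up): each
device RX queue would own a qdisc that decides what to drop, instead of
everything landing in one tail-dropped per-cpu list:

struct skb_model;			/* stand-in for struct sk_buff */

struct ingress_qdisc {
	/* returns 0 on accept, nonzero on drop/mark */
	int (*enqueue)(struct ingress_qdisc *q, struct skb_model *skb);
	void *priv;			/* e.g. fq_codel-like per-flow state */
};

struct rx_queue_model {
	struct ingress_qdisc *qdisc;	/* per device, per RX queue */
};

/* Replaces "append to the shared per-cpu backlog": the attached qdisc
 * decides, so an fq_codel-like discipline can drop per flow instead of
 * tail-dropping whatever arrives last.
 */
int ingress_enqueue(struct rx_queue_model *rxq, struct skb_model *skb)
{
	return rxq->qdisc->enqueue(rxq->qdisc, skb);
}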
> One unorthodox approach is to have a qdisc per backlog queue,
> since the backlog is per cpu; given it is abstracted as a netdev,
> it becomes a natural fit (sans the fact that the backlog queue is
> unidirectional).
Yes, but the per-cpu backlog is shared by all devices. We probably want
different qdiscs for the gre tunnel, eth0, ...