Message-ID: <1432610246.4060.220.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Mon, 25 May 2015 20:17:26 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: "John A. Sullivan III" <jsullivan@...nsourcedevel.com>
Cc: netdev@...r.kernel.org
Subject: Re: Drops in qdisc on ifb interface
On Mon, 2015-05-25 at 22:52 -0400, John A. Sullivan III wrote:
> Hmm . . . I've been thinking about that SFQ leaf qdisc. I see that
> newer kernels allow a much higher "limit" than 127 but it still seems
> that the queue depth limit for any one flow is still 127. When we do
> something like GRE/IPSec, I think the decrypted GRE traffic will
> distribute across the queues but the IPSec traffic will collapse all the
> packets initially into one queue. At 80 ms RTT and 1 Gbps wire speed, I
> would need a queue of around 7500 packets. Thus, can one say that SFQ is
> almost useless for high-BDP connections?
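As a sanity check on the figure quoted above, the bandwidth-delay product in packets can be sketched roughly as follows (assuming 1500-byte MTU-sized packets and ignoring header overhead; real traffic with smaller packets pushes the number higher, toward the ~7500 quoted):

```shell
#!/bin/sh
# Rough bandwidth-delay product in packets for an 80 ms RTT, 1 Gbps link.
rtt_ms=80
rate_bps=1000000000
pkt_bytes=1500

# bits in flight = rate * RTT; divide by 8 bits/byte and by packet size
bdp_packets=$(( rate_bps / 1000 * rtt_ms / 8 / pkt_bytes ))
echo "$bdp_packets"
```

This prints 6666, i.e. thousands of packets per flow in flight, far beyond SFQ's 127-packet per-flow depth.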
I am a bit surprised, as your 'nstat' output showed no packet
retransmits. So no packets were lost in your sfq.
>
> Is there a similar round-robin type qdisc that does not have this
> limitation?
fq_codel limit 10000
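A minimal sketch of swapping the leaf for fq_codel on the ifb setup discussed in this thread; the device name (ifb0) and class handle (1:10) are placeholders for your actual configuration:

```shell
# Replace the SFQ leaf under an assumed HTB class 1:10 on ifb0 with
# fq_codel, raising the total queue limit to 10000 packets.
tc qdisc replace dev ifb0 parent 1:10 fq_codel limit 10000

# Verify the new qdisc and watch its drop/overlimit counters.
tc -s qdisc show dev ifb0
```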
>
> If I recall correctly, if one does not attach a qdisc explicitly to a
> class, it defaults to pfifo_fast. Is that correct? Thanks - John
>
That would be pfifo.
pfifo_fast is the default root qdisc (/proc/sys/net/core/default_qdisc).
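Both defaults can be checked directly; again, ifb0 is a placeholder device name:

```shell
# Show the system-wide default root qdisc (pfifo_fast unless changed
# via the net.core.default_qdisc sysctl).
cat /proc/sys/net/core/default_qdisc

# Classes with no explicit leaf qdisc show up with a plain pfifo here.
tc -s qdisc show dev ifb0
```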