Message-ID: <1364654567.5113.85.camel@edumazet-glaptop>
Date: Sat, 30 Mar 2013 07:42:47 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Markus Trippelsdorf <markus@...ppelsdorf.de>
Cc: Vijay Subramanian <subramanian.vijay@...il.com>,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH net] net: fq_codel: Fix off-by-one error
On Sat, 2013-03-30 at 07:53 +0100, Markus Trippelsdorf wrote:
> On 2013.03.29 at 08:01 -0700, Eric Dumazet wrote:
> >
> > Just curious, did you play changing the default limit (10240 packets) ?
>
> I did some tests on my home router (running OpenWrt trunk) that is rate-
> limited with hfsc to the speed of the cable modem.
>
> My tests seem to indicate that lowering the default limit to 1024
> packets results in much better latency behavior when using bittorrent.
>
> With the default limit (10240 packets) I would get huge ping latencies
> from 600-1200ms when downloading e.g.:
> http://download.opensuse.org/distribution/12.3/iso/openSUSE-12.3-DVD-x86_64.iso.torrent
> with hundreds of peers.
>
> Setting the limit to 1024 did get the latencies back in check (20-30ms
> with occasional spikes of ~100ms).
Hi Markus
I am very bored of having to explain {fq_}codel principles each time
someone runs this kind of experiment.
The Codel principle is to _allow_ bursts, as long as the queue is
controlled. Read the Codel paper for details. TCP can be slow to lower
the queues; it takes several RTTs. So observing large queues is quite
normal in your case.
BitTorrent uses its own rate-limiting technique, defeating the
current cwnd control done in the TCP stack, because of a well-known
problem
( http://www.ietf.org/id/draft-ietf-tcpm-newcwv-00.txt )
So if your goal is reducing latencies for a _given_ class of flows, just
use prio + 3 fq_codel, and classify your packets to make sure your
lovely ping packets are not dropped or queued behind large packets.
fq_codel by itself is not universal.
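The prio + 3 fq_codel setup suggested above could be sketched roughly as
follows (the device name eth0 is an assumption; requires iproute2 and
root):

```shell
# Three-band prio qdisc; each band gets its own fq_codel instance.
tc qdisc add dev eth0 root handle 1: prio bands 3
tc qdisc add dev eth0 parent 1:1 handle 10: fq_codel
tc qdisc add dev eth0 parent 1:2 handle 20: fq_codel
tc qdisc add dev eth0 parent 1:3 handle 30: fq_codel

# Steer ICMP (ping) into the first band so it is never queued
# behind bulk traffic sitting in the lower bands.
tc filter add dev eth0 parent 1: protocol ip prio 1 \
    u32 match ip protocol 1 0xff flowid 1:1
```

Everything not matched by a filter falls through to the bands chosen by
the default priomap.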
My question about fq_codel limit was related to something completely
different.
The default is 10240 packets. The logic behind it is to control the
queue at dequeue time, not enqueue time. But we needed a safety limit to
avoid OOM in case the enqueue() rate is insane.
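For reference, this packet limit is a tunable of the qdisc and can be
lowered as Markus did (eth0 is an assumption; requires root):

```shell
# Replace the root qdisc with fq_codel, capping the queue at
# 1024 packets instead of the default 10240.
tc qdisc replace dev eth0 root fq_codel limit 1024

# Confirm the configured limit.
tc qdisc show dev eth0
```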
It could theoretically hurt a low-end machine, in case a burst fills the
queue with big GSO packets. But then these low-end machines should not
use GRO/TSO anyway (as these work against anti-bufferbloat techniques).
Probably a better choice would have been to limit sum(skb->truesize), or
sum(skb->len) (aka the current sch->qstats.backlog).
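Note that the byte backlog (sch->qstats.backlog) is already exported per
qdisc and can be watched from userspace (eth0 is assumed):

```shell
# -s prints statistics; the "backlog <bytes>b <packets>p" line
# reflects sch->qstats.backlog for each qdisc on the device.
tc -s qdisc show dev eth0
```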