Message-ID: <4FD8F551.4020708@hiramoto.org>
Date: Wed, 13 Jun 2012 22:17:21 +0200
From: Karl Hiramoto <karl@...amoto.org>
To: David Woodhouse <dwmw2@...radead.org>
CC: Nathan Williams <nathan@...verse.com.au>,
"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: PPPoE performance regression
On 06/10/12 10:32, David Woodhouse wrote:
> On Sun, 2012-06-10 at 10:50 +1000, Nathan Williams wrote:
>>> When using iperf with UDP, we can get 20Mbps downstream, but only about
>>> 15Mbps throughput when using TCP on a short ADSL line (line sync at
>>> 25Mbps). Using iperf to send UDP traffic upstream at the same time
>>> doesn't affect the downstream rate.
>> ...
>>
>> I found the change responsible for the performance problem and rebuilt
>> OpenWrt with the patch reversed on kernel 3.3.8 to confirm everything
>> still works. So the TX buffer is getting full, which causes the netif
>> queue to be stopped and restarted after some skbs have been freed?
> The *Ethernet* netif queue, yes. But not the PPP netif queue, I believe.
> I think the PPP code keeps just blindly calling dev_queue_xmit() and
> throwing away packets when they're not accepted.
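For anyone following along, the pattern being described is roughly this (a
from-memory sketch, not the literal drivers/net/ppp code):

	/* Sketch: a PPP channel start_xmit that never reports
	 * backpressure.  dev_queue_xmit()'s return value (e.g.
	 * NET_XMIT_DROP) is ignored, and returning 1 tells the PPP
	 * core the skb was consumed -- so PPP just keeps sending.
	 */
	static int pppoe_chan_xmit(struct ppp_channel *chan, struct sk_buff *skb)
	{
		/* ... prepend PPPoE header, set skb->dev ... */
		dev_queue_xmit(skb);	/* dropped silently if the qdisc is full */
		return 1;		/* "sent", as far as PPP knows */
	}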
>
>> commit 137742cf9738f1b4784058ff79aec7ca85e769d4
>> Author: Karl Hiramoto <karl@...amoto.org>
>> Date: Wed Sep 2 23:26:39 2009 -0700
>>
>> atm/br2684: netif_stop_queue() when atm device busy and
>> netif_wake_queue() when we can send packets again.
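The flow-control half of that commit, condensed from memory (the actual code
in net/atm/br2684.c differs in detail):

	/* TX path: stop the queue while the ATM device can't take more. */
	static netdev_tx_t br2684_start_xmit(struct sk_buff *skb,
					     struct net_device *dev)
	{
		/* ... */
		if (!atm_may_send(atmvcc, 0))
			netif_stop_queue(dev);	/* ATM device busy */
		/* ... */
	}

	/* pop (TX-done) callback: there is room again, restart the queue. */
	static void br2684_pop(struct atm_vcc *vcc, struct sk_buff *skb)
	{
		struct br2684_vcc *brvcc = BR2684_VCC(vcc);

		brvcc->old_pop(vcc, skb);
		if (atm_may_send(vcc, 0))
			netif_wake_queue(brvcc->device);
	}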
> Nice work; well done finding that. I've added Karl and DaveM, and the
> netdev@ list to Cc.
>
> (Btw, I assume the performance problem also goes away if you use PPPoA?
> I've made changes in the PPPoA code recently to *eliminate* excessive
> calls to netif_wake_queue(), and also to stop it from filling the ATM
> device queue. That was commit 9d02daf7 in 3.5-rc1, which is already in
> OpenWrt.)
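The core of that change, heavily simplified (a sketch of the idea, not the
exact pppoatm.c code): the channel refuses the skb instead of queueing it
blindly, and wakes PPP again from the pop callback.

	static int pppoatm_send(struct ppp_channel *chan, struct sk_buff *skb)
	{
		struct pppoatm_vcc *pvcc = chan_to_pvcc(chan);

		if (!pppoatm_may_send(pvcc, skb->truesize))
			return 0;	/* busy: PPP requeues the skb and
					 * waits for ppp_output_wakeup() */
		/* ... add encap header, hand the skb to the vcc ... */
		return 1;
	}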
>
> I was already looking vaguely at how we could limit the PPP queue depth
> for PPPoE and implement byte queue limits. Currently the PPP code just
> throws the packets at the Ethernet device and considers them 'gone',
> which is why it's hitting the ATM limits all the time. The patch you
> highlight changes the behaviour in a case that should never *happen*
> with PPP: if PPP is filling the ATM queue, it's already suffering
> massive queue bloat, and we should fix *that*.
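For reference, byte queue limits are normally wired up like this in an
Ethernet driver (a generic sketch; nothing PPP-specific exists yet):

	/* in ndo_start_xmit, once the skb is queued to the device: */
	netdev_tx_sent_queue(netdev_get_tx_queue(dev, 0), skb->len);

	/* in the TX-completion handler, once the device is done: */
	netdev_tx_completed_queue(netdev_get_tx_queue(dev, 0),
				  done_pkts, done_bytes);

	/* BQL then stops/wakes the queue itself, so that only a small,
	 * self-tuning number of bytes is ever in flight below the qdisc. */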
Agreed, the issue is in the PPP layer. I've seen this with PPPoE before,
but haven't had the itch, time, or interest to fix it. A workaround to
help mitigate the issue is to increase the TX queue length of the br2684
interface, and of the ATM device if the driver allows it, though you'll
pay the price in buffer bloat and latency.
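Something like "ip link set dev nas0 txqueuelen 256", or from C via the
SIOCSIFTXQLEN ioctl (untested sketch; "nas0" and 256 are just example
values, and the right length depends on your line rate):

	#include <string.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <linux/sockios.h>

	static int set_txqueuelen(const char *ifname, int qlen)
	{
		struct ifreq ifr;
		int ret, fd = socket(AF_INET, SOCK_DGRAM, 0);

		if (fd < 0)
			return -1;
		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		ifr.ifr_qlen = qlen;	/* same knob as "ip link ... txqueuelen" */
		ret = ioctl(fd, SIOCSIFTXQLEN, &ifr);
		close(fd);
		return ret;
	}

	/* e.g. set_txqueuelen("nas0", 256); */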
--
Karl