Message-ID: <CAA93jw4By5d2JRzMCooXM2nV2bdUfqwKCivP1ktb3wGa1wNkww@mail.gmail.com>
Date: Mon, 11 Nov 2013 11:24:37 -0800
From: Dave Taht <dave.taht@...il.com>
To: Ben Greear <greearb@...delatech.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Felix Fietkau <nbd@...nwrt.org>,
Sujith Manoharan <sujith@...jith.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Avery Pennarun <apenwarr@...gle.com>
Subject: Re: TCP performance regression

On Mon, Nov 11, 2013 at 11:11 AM, Ben Greear <greearb@...delatech.com> wrote:
> On 11/11/2013 10:31 AM, Dave Taht wrote:
>> Ah, this thread started with a huge regression in ath10k performance
>> with the new TSQ stuff, and isn't actually about a two line fix to the
>> mv ethernet driver.
>>
>> http://comments.gmane.org/gmane.linux.network/290269
>>
>> I suddenly care a lot more. And I'll care a lot, lot, lot more, if
>> someone can post a rrul test for before and after the new fq scheduler
>> and tsq change on this driver on this hardware... What, if anything,
>> in terms of improvements or regressions, happened to multi-stream
>> throughput and latency?
>>
>> https://github.com/tohojo/netperf-wrapper
>
> Not directly related, but we have run some automated tests against
> an older buffer-bloat enabled AP (not ath10k hardware, don't know the
> exact details at the moment), and in general the performance
> is horrible compared to all of the other APs we test against.
I was not happy with the D-Link product and the Streamboost
implementation, if that is what it was.
> Our tests are concerned mostly with throughput.
:(
> For reference, here are some graphs with supplicant/hostapd
> running on higher-end x86-64 hardware and ath9k:
>
> http://www.candelatech.com/lf_wifi_examples.php
>
> We see somewhat similar results with most commercial APs, though
> often they max out at 128 or fewer stations instead of the several
> hundred we get on our own AP configs.
>
> We'll update to more recent buffer-bloat AP software and post some
> results when we get a chance.
Are you talking about CeroWrt (on the WNDR3800) here? I am well aware
that it doesn't presently scale well with large numbers of clients;
fixing that awaits the per-STA queue work. (Most of the work to date
has been on the AQM-to-the-universe code.)
This is the most recent stable firmware for that:
http://snapon.lab.bufferbloat.net/~cero2/cerowrt/wndr/3.10.17-6/
I just did 3.10.18 but haven't tested it.
CeroWrt also runs HT20 by default, and there are numerous other things
that are configured more for "science" than for throughput. Notably,
the size of the aggregation queues is limited. But I'd LOVE a test
through your suite.
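For reference, a full rrul run against a netperf server is just
something like this (flags from memory and the server name is a
placeholder; netperf-wrapper --help has the authoritative list):

  # 60-second Realtime Response Under Load test: 4 TCP streams up,
  # 4 down, with latency measured under that load
  netperf-wrapper -H netperf-server.example.org -l 60 rrul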
I'd also love to see TCP tests through your suite with the AP
configured like this:

  (server) - (100 ms delay box running a recent netem with a packet
  limit of 100000+) - (AP, first with 1000 packets of buffering and
  no AQM, then with AQM) - (wifi clients)

(and I will gladly help set that up. Darn, I just drove past your offices.)
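To be concrete, the delay box and the AP could be set up along these
lines (a sketch: eth0 and wlan0 are assumed interface names, and the
exact knobs will differ per driver):

  # delay box: add 100ms of latency, with a queue deep enough that
  # netem itself never becomes the bottleneck drop point
  tc qdisc add dev eth0 root netem delay 100ms limit 100000

  # AP, first pass: a dumb 1000-packet FIFO, no AQM
  tc qdisc replace dev wlan0 root pfifo limit 1000

  # AP, second pass: fq_codel as the AQM
  tc qdisc replace dev wlan0 root fq_codel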
>
> Thanks,
> Ben
>
>
> --
> Ben Greear <greearb@...delatech.com>
> Candela Technologies Inc http://www.candelatech.com
>
--
Dave Täht