Message-ID: <CAA93jw5UG4=QRN3Wnh82wRg8YCSV7vDqGp0HyeVxsihUwLuioQ@mail.gmail.com>
Date:	Tue, 29 Nov 2011 17:06:05 +0100
From:	Dave Taht <dave.taht@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Ben Hutchings <bhutchings@...arflare.com>,
	Tom Herbert <therbert@...gle.com>, davem@...emloft.net,
	netdev@...r.kernel.org
Subject: Re: [PATCH v4 0/10] bql: Byte Queue Limits

On Tue, Nov 29, 2011 at 3:29 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Le mardi 29 novembre 2011 à 14:24 +0000, Ben Hutchings a écrit :
>
>> Not if you separate hardware queues by priority (and your high priority
>> packets are non-TCP or PuSHed).
>
> I mostly have tg3 , bnx2 cards, mono queues...
>
> I presume Dave, working on small Wifi/ADSL routers have same kind of
> hardware.

Nothing but mono queues here on wired - 4 queues on wireless, however.

My focus is on trying to make sure the
10GigE guys don't swamp the 128Kbit-to-100Mbit guys; everything in
that bandwidth range is what I care about, mostly tested against GigE
servers...

( I'm still waiting on some 10Gig hw donations to arrive)

However, the hardware array is much larger than you presume.

We have a variety of hardware: 7 CeroWrt routers located in
Bloatlab #1 at ISC.org, where there are also a couple of x86_64-based
multicore servers and a variety of related (mostly wireless)
hardware, such as a bunch of OLPCs.

Bloatlab #1 is in California, connected to the internet via 10GigE,
on a dedicated GigE connection all its own.

http://www.bufferbloat.net/projects/cerowrt/wiki/BloatLab_1

With overly reduced TX rings to combat bufferbloat, the best the
routers in the lab can do is about 290Mbit. They have excellent TCP_RR
stats, though. With larger rings, they do 540Mbit+. It's my hope that,
with BQL on the router, we can get closer to the larger figure.
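For anyone wanting to reproduce the comparison, the knobs involved look roughly like this (a hedged sketch: `eth0`, the ring size, and the byte limit are illustrative placeholders, and the BQL sysfs entries assume a BQL-enabled driver):

```shell
# Shrink the TX ring to bound hardware queueing the old way
# (ethtool -G is the standard interface; 64 is illustrative):
ethtool -G eth0 tx 64

# With BQL, the per-queue limit can instead be capped in bytes via
# sysfs, leaving the ring itself large:
echo 30000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max

# Inspect the limit BQL has currently settled on:
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
```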

One of the x86 machines in the lab does TSO and it's ugly...
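(For comparison runs, TSO can be toggled off with `ethtool -K`; `eth0` is a placeholder here.)

```shell
# Disable TSO (and GSO, which would otherwise still build large
# segments in software) to compare latency behaviour:
ethtool -K eth0 tso off gso off
```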

I'm now based in Paris specifically to test FQ and AQM solutions
over the 170 ms LFN between here and there, and have been working on
QFQ + RED (while awaiting 'RED light'), both at 100Mbit line rates and
at software-simulated rates below that, common to actual end-user
connectivity to the internet.

http://www.bufferbloat.net/issues/312
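A minimal sketch of the software-simulated setup (all device names and numbers are illustrative, not my actual test parameters; the QFQ layer and its per-flow filters are omitted for brevity):

```shell
# Simulate a 4Mbit end-user link with HTB, then attach RED as the
# leaf qdisc. RED parameters follow the usual rule of thumb
# min = max/3, burst = (2*min + max)/(3*avpkt), limit well above max.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 4mbit
tc qdisc add dev eth0 parent 1:10 handle 10: red \
    limit 400000 min 30000 max 90000 avpkt 1000 \
    burst 50 bandwidth 4mbit probability 0.02
```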

I have 3 additional routers and several e1000e machines here in Paris.

And I'm checking into the interactions of all this against everything
else, across a variety of models. ISC has made the bloatlab
available to all, I note; if anyone wants to run a test there, let me
know....


-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net