Message-ID: <CAA93jw7+KyX=7DC88hKisgaFtzn9vgLsBfUWOGhSHHPadw97Cg@mail.gmail.com>
Date:	Tue, 29 Nov 2011 17:24:58 +0100
From:	Dave Taht <dave.taht@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	John Fastabend <john.r.fastabend@...el.com>,
	Tom Herbert <therbert@...gle.com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH v4 0/10] bql: Byte Queue Limits

On Tue, Nov 29, 2011 at 3:57 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> On Tuesday 29 November 2011 at 09:51 +0100, Dave Taht wrote:
>
>> Perhaps I don't understand the gross effects of TSO very well, but if you have
>> 100 streams coming from a server, destined for X different destinations,
>> and you FQ each of them on a per-packet basis, you end up impacting the
>> downstream receive buffers much less than if you send each stream as a burst.
>
> TSO makes packets larger in order to lower CPU use in the various layers (netfilter, qdisc, ...).
>
> Imagine you could have MSS=65000 on your ethernet wire.
>
> If you need to send a high-prio packet while a prior big one is
> in flight on a dumb device (a single TX FIFO), there is nothing you
> can do but wait for the last bit of the big packet to hit the wire.
>
> Even with one flow you lose. A hundred flows don't change that
> (as long as you have proper classification in the qdisc layer, of course).
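
To put a rough number on that wait (back-of-the-envelope only, assuming a
1 Gbit/s link and that hypothetical 65000-byte segment; a 10 Gbit/s link
divides it by ten):

#include <stdio.h>

int main(void)
{
        const double line_rate_bps = 1e9;       /* assumed 1 Gbit/s link */
        const double big_pkt_bytes = 65000.0;   /* the hypothetical giant segment above */

        /* time until the last bit of the big packet hits the wire */
        double wait_us = big_pkt_bytes * 8.0 / line_rate_bps * 1e6;

        printf("a high-prio packet can wait up to ~%.0f microseconds\n", wait_us);
        return 0;
}

That's roughly half a millisecond of head-of-line blocking per giant
segment, per hop, before any queueing delay is even counted.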

People keep talking about 'prioritization' as if it can apply here.

It doesn't. Prioritization and classification are nearly hopeless
exercises when you have high-rate streams. They worked at low rates for
some traffic, but...

The focus for fixing bufferbloat is "better queueing", and what that
translates to is some form of fair queueing (at the moment I'm
enthralled with QFQ, btw), coupled with some form of active queue
management that actually works. (RED used to work but was rather
flawed; it's still better than the alternative of drop-tail.)

That doesn't necessarily mean more unmanaged dumb queues; it may mean
more *managed* queues.
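
(For reference, the textbook RED decision looks roughly like the sketch
below. This is the classic algorithm in simplified form, not the
kernel's sch_red, and every constant in it is made up for illustration.)

#include <stdbool.h>
#include <stdlib.h>

#define RED_WQ     0.002    /* EWMA weight (made up) */
#define RED_MIN_TH 5.0      /* min threshold, packets (made up) */
#define RED_MAX_TH 15.0     /* max threshold, packets (made up) */
#define RED_MAX_P  0.02     /* max drop probability (made up) */

static double red_avg;      /* running average of the queue length */

bool red_should_drop(unsigned int qlen)
{
        /* EWMA of the instantaneous queue length */
        red_avg = (1.0 - RED_WQ) * red_avg + RED_WQ * qlen;

        if (red_avg < RED_MIN_TH)
                return false;           /* below min threshold: never drop */
        if (red_avg >= RED_MAX_TH)
                return true;            /* above max threshold: always drop */

        /* in between: drop probability rises linearly towards max_p */
        double p = RED_MAX_P * (red_avg - RED_MIN_TH) / (RED_MAX_TH - RED_MIN_TH);
        return ((double)rand() / RAND_MAX) < p;
}

The well-known pain point is how sensitive the whole thing is to those
thresholds and to the EWMA weight, which is a big part of why RED was
"rather flawed" in practice.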

I wouldn't mind TSO AT ALL if the hardware did some of the above
underneath it. I've heard some rumblings that that might happen. We
spent all that engineering time making TCP go fast while minimizing the
hardware impact of it; why not spend a little more time, in the next
generation of hw/sw, making TCP work *better* on the network?

Cisco did that in the 90s; what's so hard about trying now, in
software and/or hardware?

Now look, this thread has drifted way off the original topic, which was
BQL, and BQL mostly rocks. The MIAD (as opposed to AIMD) controller in
it bothers me, but that can be looked at more closely later.
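
(A caricature of what MIAD means, just to illustrate the term; this is
NOT the actual dql code, and the step sizes are invented:)

#include <stdbool.h>

static unsigned int limit_bytes = 4096;         /* made-up starting limit */

void miad_adjust(bool queue_starved)
{
        if (queue_starved)
                limit_bytes *= 2;               /* multiplicative increase */
        else if (limit_bytes > 512)
                limit_bytes -= 512;             /* small additive decrease */
}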

Anyway, I want to get back to making it work on top of my testbed, so I
can finally see models matching reality (and vice versa) and keep
working on producing demonstrable results:

results that can help fix some problems in software now, for end users,
gateways, routers, servers, and data centers, reducing latencies from
trips around the moon to around your living room...

...and that can get slammed into hardware someday, if there is ever
market demand for things like interactive video, gaming, or voice
applications that just work.

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
