Date:	Tue, 3 Jan 2012 14:00:16 +0100
From:	Dave Taht <dave.taht@...il.com>
To:	"John A. Sullivan III" <jsullivan@...nsourcedevel.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Michal Kubeček <mkubecek@...e.cz>,
	netdev@...r.kernel.org
Subject: Re: tc filter mask for ACK packets off?

On Tue, Jan 3, 2012 at 1:45 PM, John A. Sullivan III
<jsullivan@...nsourcedevel.com> wrote:
> On Tue, 2012-01-03 at 13:32 +0100, Eric Dumazet wrote:
>> Le mardi 03 janvier 2012 à 07:18 -0500, John A. Sullivan III a écrit :
>> > On Tue, 2012-01-03 at 10:36 +0100, Dave Taht wrote:
>> > <snip>
>> > > I'd go into more detail, but after what I hope are the final two
>> > > fixes to sfq and qfq land in the net-next kernel (after some more
>> > > testing), I like to think I have a more valid approach than this
>> > > in the works, but that too will require some more development
>> > > and testing.
>> > >
>> > > http://www.teklibre.com/~d/bloat/pfifo_fast_vs_sfq_qfq_linear.png
>> > >
>> > <snip>
>> > Hmmm . . . certainly shattered my concerns about replacing pfifo_fast
>> > with SFQ! Thanks - John

SFQ as presently implemented (and by presently, I mean, as of yesterday;
at the rate Eric is going it could be different by tomorrow!) is VERY
well suited to sub-100Mbit desktops, wireless stations/laptops and other
such devices, home gateways with sub-100Mbit uplinks, and the like. That's
a few hundred million devices that aren't using it today, defaulting to
pfifo_fast instead and suffering for it.
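For a box in that range the swap is a one-liner (a sketch only; eth0 and
the perturb interval are illustrative, and you'll need root and iproute2):

```shell
# Replace the default pfifo_fast root qdisc with SFQ on eth0.
# "perturb 10" rehashes flows every 10 seconds so long-lived
# hash collisions don't pin two flows together forever.
tc qdisc replace dev eth0 root sfq perturb 10

# Confirm what is now installed and watch its stats.
tc -s qdisc show dev eth0
```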

QFQ is its big brother, and I have hopes it can scale up to 10GigE
once suitable techniques are found for managing the sub-queue depth.

The enhancements to SFQ Eric proposed in the other thread might get it
to where it outperforms pfifo_fast (by a lot) in its default configuration
(e.g. txqueuelen 1000) with few side effects. Scaling further up than that...

... I don't have a good picture of GigE performance at the moment with
any of these advanced qdiscs, and so have no recommendation.

I do highly recommend that you fiddle with this stuff! I do have to
note that the graph above had GSO/TSO turned off.
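If you want to reproduce conditions like the graph's, the offloads can be
toggled with ethtool (eth0 again is just an example; whether you want this
on a production box is a separate question):

```shell
# Turn off generic and TCP segmentation offload so the qdisc
# sees real packet-sized skbs rather than 64KB super-packets.
ethtool -K eth0 gso off tso off

# Verify the current offload settings.
ethtool -k eth0 | grep -E 'generic-segmentation|tcp-segmentation'
```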

>> Before you do, take the time to read the warning in sfq source :
>>
>>
>>       ADVANTAGE:
>>
>>       - It is very cheap. Both CPU and memory requirements are minimal.
>>
>>       DRAWBACKS:
>>
>>       - "Stochastic" -> It is not 100% fair.
>>       When hash collisions occur, several flows are considered as one.

This is in part the advantage of SFQ over QFQ: the maximum queue
depth stays well managed despite the collisions.
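For a sense of how often "several flows are considered as one", here's a
toy birthday-problem estimate, pure arithmetic and no tc involved (1024
buckets matches the classic SFQ hash table size; 128 flows is an arbitrary
example load):

```shell
# Probability that at least two of N flows collide when hashed
# uniformly into B buckets. With 128 flows and 1024 buckets a
# collision somewhere is nearly certain -- hence "stochastic".
awk 'BEGIN {
  B = 1024; N = 128; p_no = 1.0
  for (i = 0; i < N; i++) p_no *= (B - i) / B
  printf "P(some collision, %d flows, %d buckets) = %.4f\n", N, B, 1 - p_no
}'
```

The perturb timer exists precisely so that any given collision is
temporary rather than permanent.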

>>       - "Round-robin" -> It introduces larger delays than virtual clock
>>       based schemes, and should not be used for isolating interactive
>>       traffic from non-interactive. It means, that this scheduler
>>       should be used as leaf of CBQ or P3, which put interactive traffic
>>       to higher priority band.

These delays are NOTHING compared to what pfifo_fast can induce.

Very little traffic nowadays is marked as interactive to any statistically
significant extent, so any FQ method effectively makes more traffic
interactive than prioritization can.
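That said, the "leaf of CBQ or P3" pattern from the sfq comment looks
roughly like this with the prio qdisc (a sketch only; the handles and
band count are arbitrary):

```shell
# Three-band prio root; interactive traffic (by TOS/skb priority)
# lands in band 0, bulk in band 2.
tc qdisc add dev eth0 root handle 1: prio bands 3

# Hang an SFQ under each band so flows *within* a band still
# share the link fairly among themselves.
tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:2 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:3 handle 30: sfq perturb 10
```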

>> SFQ (as a direct replacement of dev root qdisc) is fine if most of your trafic
>> is of same kind/priority.

Which is the case for most desktops, laptops, gateways, wireless devices, etc.

> Yes, I suppose I should have been more specific, replacing pfifo_fast
> when I am using something else to prioritize and shape my traffic like
> HFSC.

I enjoyed getting your HFSC experience secondhand. It would be
very interesting to get your feedback on trying this stuff.
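For reference, dropping SFQ in under HFSC leaf classes is the same
one-line-per-class pattern (the rates, class ids and eth0 here are purely
illustrative, not a tuned config):

```shell
# HFSC root with one shaped leaf class; numbers are examples only.
tc qdisc add dev eth0 root handle 1: hfsc default 10
tc class add dev eth0 parent 1: classid 1:10 hfsc sc rate 80mbit ul rate 80mbit

# Replace the class's default fifo with SFQ so competing flows
# inside the class get stochastic fairness instead of FIFO order.
tc qdisc add dev eth0 parent 1:10 handle 110: sfq perturb 10
```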

More data is needed to beat the bloat.

> Hmm . . . although I still wonder about iSCSI SANs . . .   Thanks

I wonder too. Most of the people running iSCSI seem to have an
aversion to packet loss, yet are running over TCP. I *think*
FQ methods will improve latency dramatically for iSCSI
when iSCSI has multiple initiators....


> - John
>



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net
