Message-ID: <CAN==1Ro3Va7JHfOoENu=gJqfaktrqLvpo3NWV8=ZB9OKtyfeYQ@mail.gmail.com>
Date:	Wed, 2 Nov 2011 10:36:31 +0100
From:	Karel Rericha <karel@...tel.cz>
To:	Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: Quick Fair Queue scheduler maturity and examples

2011/10/27 Eric Dumazet <eric.dumazet@...il.com>:
> On Thursday, 27 October 2011 at 18:08 +0200, Eric Dumazet wrote:
>> On Thursday, 27 October 2011 at 14:46 +0200, Karel Rericha wrote:
>>
>> > Actually I am doing some research to replace our main shaping
>> > machine with 60 000+ htb classes, which now saturates a 12-core Xeon
>> > Westmere to 30% (there are five gigabit network ports, each interface
>> > affinitized to cores). AFAIK QFQ should be O(1) complexity, so it
>> > would bring saturation and the required number of cores down
>> > considerably (HTB has O(log(N)) complexity).
>> >
>> > I have a test machine and about two months to decide whether we
>> > will stay with HTB or try something else. So it would be VERY
>> > helpful if you would search your memory instead of your dead disk
>> > :-) and send me some example of QFQ usage, if I can ask for a little
>> > of your time. I promise to publish the results here in return.
>> >
>> > Thanks, Karel
>> >
>>
>> That seems like a good challenge to me ;)
>>
>> First, upgrade to a recent kernel with QFQ included.
>> Also upgrade iproute2 to a recent enough version.
>>
>> Then you discover "tc  ... qfq help" is not that helpful :(
>>
>> # tc qdisc add dev eth3 root qfq help
>> Usage: ... qfq
>>
>> OK, its parameters are:
>>
>>       qfq weight num1 [maxpkt BYTES]
>>
>> You should not touch maxpkt; its default value is 2048.
>>
>> Oh well, I just tried the obvious and my (remote) machine doesn't
>> answer me anymore...
>>
>> Time for a bit of debugging I am afraid :(
>
> Never mind, it was a user error :)
>
> Here is what I used during my tests, I guess you can adapt your
> scripts...
>
> DEV=eth3
> RATE="rate 40Mbit"
> TNETS="10.2.2.0/25"
> ALLOT="allot 20000"
>
> tc qdisc del dev $DEV root 2>/dev/null
>
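> # CBQ root at line rate; class 1:1 is the bounded top-level class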
> tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 rate 1000Mbit \
>        bandwidth 1000Mbit
> tc class add dev $DEV parent 1: classid 1:1 \
>        est 1sec 8sec cbq allot 10000 mpu 64 \
>        rate 1000Mbit prio 1 avpkt 1500 bounded
>
> # output to test nets: 40 Mbit limit
> tc class add dev $DEV parent 1:1 classid 1:11 \
>        est 1sec 8sec cbq $ALLOT mpu 64      \
>        $RATE prio 2 avpkt 1400 bounded
>
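> # QFQ sits under the 40 Mbit class: fair sharing within that cap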
> tc qdisc add dev $DEV parent 1:11 handle 11:  \
>        est 1sec 8sec qfq
>
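> # classify by flow hash (rxhash) into 8 buckets -> classes 11:1..11:8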
> tc filter add dev $DEV protocol ip parent 11: handle 3 \
>        flow hash keys rxhash divisor 8
>
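> # create the 8 QFQ classes (default weight), each with a short pfifo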
> for i in `seq 1 8`
> do
>  classid=11:$(printf %x $i)
>  tc class add dev $DEV classid $classid qfq
>  tc qdisc add dev $DEV parent $classid pfifo limit 30
> done
>
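> # steer traffic for the test nets into the 40 Mbit class 1:11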
> for privnet in $TNETS
> do
>        tc filter add dev $DEV parent 1: protocol ip prio 100 u32 \
>                match ip dst $privnet flowid 1:11
> done
>
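> # catch-all: everything else goes to class 1:1 (full line rate)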
> tc filter add dev $DEV parent 1: protocol ip prio 100 u32 \
>        match ip protocol 0 0x00 flowid 1:1
>
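> # load test: 32 parallel UDP streams of 50-byte payloads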
> iperf -u -c 10.2.2.1 -P 32 -l 50
>
>
>

Thanks for the example, Eric.

But it has only added to my confusion :-) I was under the impression
(and read somewhere) that QFQ is a non-work-conserving scheduler, so I
could use it more or less like HTB or HFSC to set bandwidth constraints
on flows. But from this example (and from the sources/patches/papers,
which I won't pretend I fully understand) it looks to me like a
multiqueue scheduler with an arbitrary number of queues and the ability
to assign flows to those queues arbitrarily. So it is some sort of fair
division of the available bandwidth among flows, without bandwidth caps
on the individual flows. I really don't see what is non-work-conserving
here :-S
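
If weights are the only per-class knob, then from the "qfq weight num1"
syntax above I would guess they just set ratios, something like this
(untested sketch, reusing the eth3 and 11: handles from your script):

tc class change dev eth3 classid 11:1 qfq weight 10
tc class change dev eth3 classid 11:2 qfq weight 1

i.e. 11:1 would get ten times the share of 11:2 once the 40 Mbit class
is saturated, but neither class is individually capped.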

Please save my soul and enlighten me, because I am at a dead end now :-)
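
For reference, one of our 60 000 HTB classes today looks roughly like
this (simplified sketch, device and addresses made up), i.e. a hard
per-customer cap:

tc class add dev eth1 parent 1:1 classid 1:100 htb rate 10Mbit ceil 20Mbit
tc filter add dev eth1 parent 1: protocol ip prio 100 u32 \
        match ip dst 192.0.2.10 flowid 1:100

If QFQ classes have only weights and no rate, I guess I would still
need HTB (or your bounded CBQ trick) above QFQ just for the caps?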