Message-ID: <b30d1c3b0911042308n616bc360v7b96b6543029f232@mail.gmail.com>
Date: Thu, 5 Nov 2009 16:08:07 +0900
From: Ryousei Takano <ryousei@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Stephen Hemminger <shemminger@...tta.com>,
Patrick McHardy <kaber@...sh.net>,
Linux Netdev List <netdev@...r.kernel.org>,
takano-ryousei@...t.go.jp
Subject: Re: HTB accuracy on 10GbE
Hi Eric,
On Thu, Nov 5, 2009 at 2:03 AM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Ryousei Takano a écrit :
>> Hi Eric,
>>
>> Thanks for your suggestion.
>>
>> On Wed, Nov 4, 2009 at 8:31 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>>> Ryousei Takano a écrit :
>>>
>>>> I tried iperf with 60-second samples and got almost the same result.
>>>>
>>>> Here is the result:
>>>> target  sender  receiver  (Gbit/s)
>>>>  1.000    1.00      1.00
>>>>  2.000    2.01      2.01
>>>>  3.000    3.03      3.02
>>>>  4.000    4.07      4.07
>>>>  5.000    5.05      5.05
>>>>  6.000    6.16      6.16
>>>>  7.000    7.22      7.22
>>>>  8.000    8.15      8.15
>>>>  9.000    9.23      9.23
>>>>  9.900    9.69      9.69
>>>>
>>> One thing to consider is the estimation error in qdisc_l2t(); the rate table has only 256 slots:
>>>
>>> static inline u32 qdisc_l2t(struct qdisc_rate_table* rtab, unsigned int pktlen)
>>> {
>>>         int slot = pktlen + rtab->rate.cell_align + rtab->rate.overhead;
>>>         if (slot < 0)
>>>                 slot = 0;
>>>         slot >>= rtab->rate.cell_log;
>>>         if (slot > 255)
>>>                 return (rtab->data[255]*(slot >> 8) + rtab->data[slot & 0xFF]);
>>>         return rtab->data[slot];
>>> }
>>>
>>>
>>> Maybe you can try changing the class mtu to 40000 instead of 9000, and the quantum to 60000 too:
>>>
>>> tc class add dev $DEV parent 1: classid 1:1 htb rate ${rate}mbit mtu 40000 quantum 60000
>>>
>>> (because your TCP stack sends large buffers (~60000 bytes), as your NIC can offload TCP segmentation)
>>>
>>>
>> You are right!
>> I am using TSO. The myri10ge driver is passing 64KB packets to the NIC.
>> I changed the class mtu parameter to 64000 instead of 9000.
>>
>> Here is the result:
>> 1.000 1.00
>> 2.000 2.01
>> 3.000 2.99
>> 4.000 4.01
>> 5.000 5.01
>> 6.000 6.04
>> 7.000 7.06
>> 8.000 8.09
>> 9.000 9.11
>> 9.900 9.64
>>
>> It's not so bad!
>> I have updated the results on my page with more details.
>>
>
>
> In fact, I gave you 40000 because rtab will then contain 256 elements, covering sizes from 0 to 65280.
>
> If you use 64000, you lose some precision (for small packets, for example).
>
I see.
In my experiment it is not a big problem, since I do not send short packets.
I got almost the same result in both cases, "mtu 64000" and "mtu 40000 quantum 60000".
Anyway, setting an mtu larger than the physical MTU does not quite make sense.
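
For reference, your point about small packets is easy to see with the
same kind of toy model (again assuming a 256-byte cell, i.e. cell_log
8, for mtu values in this range): every packet shorter than one cell
falls into slot 0 and is billed as a full cell.

#include <stdio.h>

/* Toy illustration of the small-packet overcharge: with an assumed
 * cell_log of 8 (256-byte cells), slot i is billed as an
 * (i + 1) << cell_log byte packet. */
int main(void)
{
        int cell_log = 8;               /* assumed: 256-byte cells */
        int pktlen = 64;                /* e.g. a bare TCP ACK */
        int slot = pktlen >> cell_log;  /* -> slot 0 */
        int billed = (slot + 1) << cell_log;

        printf("%d-byte packet billed as %d bytes (slot %d)\n",
               pktlen, billed, slot);   /* a 4x overcharge */
        return 0;
}

Since my test traffic is bulk transfers only, this overcharge does not
show up in the numbers above.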
Best regards,
Ryousei