Message-ID: <769acdef-72c1-b4ed-4699-9423ce59db67@alibaba-inc.com>
Date: Thu, 09 Jul 2020 05:38:03 +0800
From: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] net: sched: Lockless Token Bucket (LTB)
qdisc
On 7/8/20 2:14 PM, Eric Dumazet wrote:
>
>
> On 7/8/20 9:38 AM, YU, Xiangning wrote:
>> Lockless Token Bucket (LTB) is a qdisc implementation that controls the
>> use of outbound bandwidth on a shared link. With the help of lockless
>> qdisc, and by decoupling rate limiting and bandwidth sharing, LTB is
>> designed to scale in the cloud data centers.
>>
>
> Before reviewing this patch (with many outcomes at first glance),
> we need experimental data, eg how this is expected to work on a
> typical host with 100Gbit NIC (multi queue), 64 cores at least,
> and what is the performance we can get from it (Number of skbs per second,
> on a class limited to 99Gbit)
>
> Four lines of changelog seems terse to me.
>
This is what I sent out in my first email. So far I don't see any problems with 2*25G bonding on 64 cores. Let me see if I can find a 100G NIC; please stay tuned.
"""
Here are some quick results we got with pktgen over a 10Gbps link:
./samples/pktgen/pktgen_bench_xmit_mode_queue_xmit.sh -i eth0 -t $NUM
We ran it four times for each of 5, 10, 20, and 30 threads, with both HTB
and LTB, summing the per-thread results to get the aggregate rate for each
run. We saw a significant performance gain, and we believe there is still
room for further improvement.
HTB (threads: aggregate pps, four runs):
 5: 1365793 1367419 1367896 1365359
10: 1130063 1131307 1130035 1130385
20:  629792  629517  629219  629234
30:  582358  582537  582707  582716

LTB (threads: aggregate pps, four runs):
 5: 3738416 3745033 3743431 3744847
10: 8327665 8327129 8320331 8322122
20: 6972309 6976670 6975789 6967784
30: 7742397 7742951 7738911 7742812
"""
Thanks,
- Xiangning