Message-ID: <419dbdae-19f9-2bb3-2ca5-eaffd58f1266@alibaba-inc.com>
Date: Fri, 10 Jul 2020 09:42:57 +0800
From: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] net: sched: Lockless Token Bucket (LTB) qdisc
On 7/9/20 3:22 PM, Eric Dumazet wrote:
>
>
> On 7/9/20 11:20 AM, YU, Xiangning wrote:
>>
>>
>> On 7/9/20 10:15 AM, Eric Dumazet wrote:
>>>
>>> Well, at Google we no longer have this issue.
>>>
>>> We adopted the EDT model, so that rate limiting can be done in eBPF, by simply adjusting skb->tstamp.
>>>
>>> The qdisc is MQ + FQ.
>>>
>>> Stanislav Fomichev will present this use case at the netdev conference
>>>
>>> https://netdevconf.info/0x14/session.html?talk-replacing-HTB-with-EDT-and-BPF
>>>
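For readers unfamiliar with the EDT approach, here is a minimal sketch of what such a tc/eBPF program could look like, assuming a single fixed rate. The map layout, the RATE_BYTES_PER_SEC constant and the program name are illustrative assumptions, not the code discussed in this thread. The program only computes an earliest departure time and writes it into skb->tstamp; sch_fq (installed under mq) then holds each packet until that time.

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define NSEC_PER_SEC 1000000000ULL
/* Hypothetical 100 Mbit/s limit, expressed in bytes per second. */
#define RATE_BYTES_PER_SEC (100 * 1000 * 1000ULL / 8)

struct edt_state {
	__u64 next_tstamp;  /* earliest departure time of the next packet */
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct edt_state);
} edt_map SEC(".maps");

SEC("tc")
int edt_rate_limit(struct __sk_buff *skb)
{
	__u32 key = 0;
	struct edt_state *st = bpf_map_lookup_elem(&edt_map, &key);
	__u64 now, delay, tstamp;

	if (!st)
		return TC_ACT_OK;

	now = bpf_ktime_get_ns();
	/* Time this packet occupies the link at the configured rate. */
	delay = (__u64)skb->len * NSEC_PER_SEC / RATE_BYTES_PER_SEC;

	tstamp = st->next_tstamp;
	if (tstamp < now)
		tstamp = now;

	/* FQ releases the packet no earlier than skb->tstamp, so advancing
	 * the departure horizon is all the pacing work needed here. A
	 * production program would update next_tstamp atomically and drop
	 * packets whose departure time is too far in the future. */
	st->next_tstamp = tstamp + delay;
	skb->tstamp = tstamp;
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";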
>> This is cool, I would love to learn more about this!
>>
>> Still, please correct me if I'm wrong. This looks more like pacing on a per-flow basis; how do you support overall rate limiting across multiple flows? Each individual flow won't have a global view of the rate usage of the others.
>>
>
>
> No, this is really per-aggregate rate limiting; multiple TCP/UDP flows can share the same class.
>
> Before that, we would have between 10 and 3000 HTB classes on a host.
> We had internal code to bypass HTB (on the bond0 device) for non-throttled packets,
> since HTB could hardly cope with more than 1 Mpps.
>
> Now, an eBPF program (run from sch_handle_egress()) uses maps to perform classification
> and (optional) rate limiting based on various rules.
>
> MQ+FQ is already doing the per-flow pacing (we have been using this for 8 years now)
>
> The added eBPF code extended this pacing to be per aggregate as well.
>
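A hedged sketch of how the per-aggregate variant described above might look, extending the single-rate example earlier in this mail: the aggregate id derived from skb->mark and the map shape are assumptions made for illustration (the message only says that maps and "various rules" are used). Every flow that classifies to the same entry shares that entry's rate, while MQ+FQ continues to do the per-flow pacing.

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

#define NSEC_PER_SEC 1000000000ULL

struct aggregate {
	__u64 rate_bytes_per_sec;  /* filled in from user space */
	__u64 next_tstamp;         /* departure horizon of this aggregate */
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 4096);
	__type(key, __u32);            /* aggregate id */
	__type(value, struct aggregate);
} agg_map SEC(".maps");

SEC("tc")
int edt_per_aggregate(struct __sk_buff *skb)
{
	/* Illustrative classification rule: use skb->mark as the aggregate
	 * id. The actual classification rules are not shown in the thread. */
	__u32 agg_id = skb->mark;
	struct aggregate *agg;
	__u64 now, delay, tstamp;

	agg = bpf_map_lookup_elem(&agg_map, &agg_id);
	if (!agg || !agg->rate_bytes_per_sec)
		return TC_ACT_OK;  /* unclassified traffic is not throttled */

	now = bpf_ktime_get_ns();
	delay = (__u64)skb->len * NSEC_PER_SEC / agg->rate_bytes_per_sec;

	tstamp = agg->next_tstamp;
	if (tstamp < now)
		tstamp = now;

	/* All flows mapped to this aggregate share one departure horizon,
	 * hence one rate limit; atomics or bpf_spin_lock would be needed
	 * for a correct concurrent update in practice. */
	agg->next_tstamp = tstamp + delay;
	skb->tstamp = tstamp;
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";

In such a setup, user space would populate agg_map with the per-class rates, much like configuring HTB classes, and the program would be attached at clsact egress on the device running MQ+FQ.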
That's very interesting! Thank you for sharing.
We have been deploying LTB for several years too. It's far better than HTB but still shows some degradation compared with the baseline. Using EDT across flows should be able to yield an even better result.
Thanks
- Xiangning