Message-ID: <345bf201-f7cf-c821-1dba-50d0f2b76101@alibaba-inc.com>
Date: Fri, 10 Jul 2020 02:20:00 +0800
From: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] net: sched: Lockless Token Bucket (LTB)
qdisc
On 7/9/20 10:15 AM, Eric Dumazet wrote:
>
>
> On 7/9/20 10:04 AM, YU, Xiangning wrote:
>>
>>
>> On 7/8/20 6:24 PM, Eric Dumazet wrote:
>>>
>>>
>>> On 7/8/20 5:58 PM, YU, Xiangning wrote:
>>>>
>>>>
>>>> On 7/8/20 5:08 PM, Eric Dumazet wrote:
>>>>>
>>>>>
>>>>> On 7/8/20 4:59 PM, YU, Xiangning wrote:
>>>>>
>>>>>>
>>>>>> Yes, we are touching a cache line here to make sure the aggregation tasklet is scheduled immediately. In most cases it is a call to test_and_set_bit().
>>>>>
>>>>>
>>>>> test_and_set_bit() is dirtying the cache line even if the bit is already set.
>>>>>
>>>>
>>>> Yes. I do hope we can avoid this.
>>>>
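One idiom that should avoid dirtying the line when the bit is already set is to do a plain read first (a sketch only; the flag and field names below are made up, not taken from the patch):

	/* Sketch with made-up names: the plain test_bit() read keeps the
	 * cache line shared when the tasklet is already scheduled; only
	 * the first enqueuer pays for the atomic read-modify-write.
	 */
	if (!test_bit(LTB_AGG_SCHEDULED, &cl->state) &&
	    !test_and_set_bit(LTB_AGG_SCHEDULED, &cl->state))
		tasklet_schedule(&cl->aggr_tasklet);
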
>>>>>>
>>>>>> We might be able to do some inline processing without a tasklet here, but we still need to make sure the aggregation won't run simultaneously on multiple CPUs.
>>>>>
>>>>> I am actually surprised you can reach 8 Mpps with so many cache line bouncing around.
>>>>>
>>>>> If you replace the ltb qdisc with standard mq+pfifo_fast, what kind of throughput do you get ?
>>>>>
>>>>
>>>> Just tried it using pktgen; we are far from the baseline. I can get 13 Mpps with 10 threads in my test setup.
>>>
>>> This is quite low performance.
>>>
>>> I suspect your 10 threads are sharing a smaller number of TX queues, perhaps?
>>>
>>
>> Thank you for the hint. Looks like pktgen only used the first 10 queues.
>>
>> I fine-tuned ltb to reach 10 Mpps with 10 threads last night. I can push the limit further, but we probably won't be able to get close to the baseline. Rate limiting really brings a lot of headache; at least we are not burning CPUs to get this result.
>
> Well, at Google we no longer have this issue.
>
> We adopted the EDT model, so rate limiting can be done in eBPF by simply adjusting skb->tstamp.
>
> The qdisc is MQ + FQ.
>
> Stanislav Fomichev will present this use case at the netdev conference:
>
> https://netdevconf.info/0x14/session.html?talk-replacing-HTB-with-EDT-and-BPF
>
This is cool; I would love to learn more about it!
Still, please correct me if I'm wrong: this looks more like pacing on a per-flow basis, so how do you support overall rate limiting across multiple flows? Each individual flow won't have a global view of the rate usage of the others.
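To make sure I understand the model, a minimal per-flow EDT program would look roughly like this (a sketch only, not the program from the talk; the rate constant and the single piece of pacing state are made up, and an aggregate limit would presumably need that state shared by all flows of a class, e.g. in a BPF map):

	/* Sketch, not the program from the talk: clsact egress program that
	 * paces one flow at a fixed rate by advancing skb->tstamp; the
	 * mq + fq qdiscs on the device then release packets at the
	 * requested departure times.
	 */
	#include <linux/bpf.h>
	#include <linux/pkt_cls.h>
	#include <bpf/bpf_helpers.h>

	#define RATE_BYTES_PER_SEC	(125 * 1000 * 1000)	/* 1 Gbit/s, example only */
	#define NSEC_PER_SEC		1000000000ULL

	static __u64 next_tstamp;	/* simplified single-flow state; an
					 * aggregate limit would keep this in
					 * a map shared across flows
					 */

	SEC("tc")
	int edt_pace(struct __sk_buff *skb)
	{
		__u64 now = bpf_ktime_get_ns();
		__u64 delay = (__u64)skb->len * NSEC_PER_SEC / RATE_BYTES_PER_SEC;
		__u64 ts = next_tstamp;

		if (ts < now)
			ts = now;
		skb->tstamp = ts;		/* FQ honors this departure time */
		next_tstamp = ts + delay;	/* real code needs an atomic update */
		return TC_ACT_OK;
	}

	char _license[] SEC("license") = "GPL";
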
Thanks,
- Xiangning