Message-ID: <7ea368d0-d12c-2f04-17a7-1e31a61bbe2b@alibaba-inc.com>
Date: Fri, 10 Jul 2020 01:04:34 +0800
From: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next v2 2/2] net: sched: Lockless Token Bucket (LTB) qdisc
On 7/8/20 6:24 PM, Eric Dumazet wrote:
>
>
> On 7/8/20 5:58 PM, YU, Xiangning wrote:
>>
>>
>> On 7/8/20 5:08 PM, Eric Dumazet wrote:
>>>
>>>
>>> On 7/8/20 4:59 PM, YU, Xiangning wrote:
>>>
>>>>
>>>> Yes, we are touching a cache line here to make sure aggregation tasklet is scheduled immediately. In most cases it is a call to test_and_set_bit().
>>>
>>>
>>> test_and_set_bit() is dirtying the cache line even if the bit is already set.
>>>
>>
>> Yes. I do hope we can avoid this.
>>
>>>>
>>>> We might be able to do some inline processing without tasklet here, still we need to make sure the aggregation won't run simultaneously on multiple CPUs.
>>>
>>> I am actually surprised you can reach 8 Mpps with so many cache line bouncing around.
>>>
>>> If you replace the ltb qdisc with standard mq+pfifo_fast, what kind of throughput do you get ?
>>>
>>
>> Just tried it using pktgen; we are far from baseline. I can get 13 Mpps with 10 threads in my test setup.
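
(For reference, the mq+pfifo_fast baseline Eric suggests can be set up with something along these lines; the device name is an assumption, and mq attaches the system default per-queue qdisc, pfifo_fast unless overridden:)

```shell
# Assumed NIC name; adjust for the test machine.
DEV=eth0

# Replace the root qdisc with mq; each TX queue then gets its own
# instance of the default qdisc (pfifo_fast unless changed via sysctl).
tc qdisc replace dev "$DEV" root handle 1: mq
tc -s qdisc show dev "$DEV"
```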
>
> This is quite low performance.
>
> I suspect your 10 threads are sharing a smaller number of TX queues perhaps ?
>
Thank you for the hint. Looks like pktgen only used the first 10 queues.
I fine-tuned ltb to reach 10 Mpps with 10 threads last night. I can push the limit further, but we probably won't be able to get close to baseline. Rate limiting really brings a lot of headache; at least we are not burning CPUs to get this result.
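
(For completeness, pktgen can be told to spread its kthreads across all TX queues rather than defaulting to the first few. A sketch along the lines of Documentation/networking/pktgen; the thread count and device name are assumptions:)

```shell
DEV=eth0      # assumed test NIC
THREADS=10    # assumed: one pktgen kthread per CPU/TX queue

modprobe pktgen

for t in $(seq 0 $((THREADS - 1))); do
    # Attach one device clone per kthread...
    echo "rem_device_all"     > /proc/net/pktgen/kpktgend_$t
    echo "add_device $DEV@$t" > /proc/net/pktgen/kpktgend_$t
    # ...and map its packets to the TX queue of the CPU it runs on,
    # so the 10 threads do not all pile onto the first queues.
    echo "flag QUEUE_MAP_CPU" > /proc/net/pktgen/$DEV@$t
done
```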
Thanks,
- Xiangning