Message-ID: <0d8c5628-d5de-5eb7-9822-a63444226554@alibaba-inc.com>
Date: Thu, 09 Jul 2020 04:48:04 +0800
From: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
To: Eric Dumazet <eric.dumazet@...il.com>,
David Miller <davem@...emloft.net>
Cc: netdev@...r.kernel.org
Subject: Re: [PATCH net-next v2 1/2] irq_work: Export symbol
"irq_work_queue_on"
On 7/8/20 1:27 PM, Eric Dumazet wrote:
>
>
> On 7/8/20 12:37 PM, David Miller wrote:
>> From: "YU, Xiangning" <xiangning.yu@...baba-inc.com>
>> Date: Thu, 09 Jul 2020 00:38:16 +0800
>>
>>> @@ -111,7 +111,7 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
>>>  	return true;
>>>  #endif /* CONFIG_SMP */
>>>  }
>>> -
>>> +EXPORT_SYMBOL_GPL(irq_work_queue_on);
>>
>> You either removed the need for kthreads or you didn't.
>>
>> If you are queueing IRQ work like this, you're still using kthreads.
>>
>> That's why Eric is asking why you still need this export.
>>
>
> I received my copy of the 2/2 patch very late, so I probably misunderstood
> the v2 changes.
>
> It seems irq_work_queue_on() is still heavily used, and this makes me nervous.
>
> Has this thing been tested on a 256-core platform?
>
Yes, the irq_work is used here for inter-CPU notification after we do rate limiting. We have done extensive testing and deployment on machines with at least 64 cores. I will see if I can get a 256-core machine to gather more results.
For rate limiting, we face a dilemma: the token bucket needs to run in one centralized place to be accurate, while packets are submitted from all CPUs. So we need a low-latency mechanism to efficiently notify other CPUs. It would be great if you could shed some light on how we can better solve this problem.
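
To make the pattern concrete, here is a rough sketch of the kind of cross-CPU kick we mean (the ltb_* names and the kthread wakeup are illustrative only, not the actual patch code):

#include <linux/irq_work.h>
#include <linux/sched.h>

struct ltb_class {
	struct irq_work kick;		/* cross-CPU notification */
	struct task_struct *worker;	/* centralized token-bucket kthread */
	int shaper_cpu;			/* CPU the worker is bound to */
};

/* Runs in hard-IRQ context on shaper_cpu. */
static void ltb_kick_fn(struct irq_work *work)
{
	struct ltb_class *cl = container_of(work, struct ltb_class, kick);

	wake_up_process(cl->worker);
}

static void ltb_class_init(struct ltb_class *cl, struct task_struct *worker,
			   int cpu)
{
	cl->worker = worker;
	cl->shaper_cpu = cpu;
	init_irq_work(&cl->kick, ltb_kick_fn);
}

/*
 * Called from the xmit path on any CPU: queue the kick on the shaper
 * CPU. irq_work_queue_on() is a no-op while the work is still pending,
 * so per-packet calls do not flood the target CPU with IPIs.
 */
static void ltb_notify(struct ltb_class *cl)
{
	irq_work_queue_on(&cl->kick, cl->shaper_cpu);
}

The attraction of irq_work here is that queueing is cheap from any context and an already-pending entry is not re-queued, so the notification path stays low latency without storming the shaper CPU.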
Thanks,
- Xiangning