Message-ID: <8124bbe5-eaa8-2106-2695-4788ec0f6544@gmail.com>
Date: Thu, 18 Jul 2019 13:42:17 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Sudip Mukherjee <sudipm.mukherjee@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"David S. Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression with napi/softirq ?
On 7/18/19 1:18 PM, Sudip Mukherjee wrote:
> Hi Eric,
>
> On Thu, Jul 18, 2019 at 7:58 AM Eric Dumazet <eric.dumazet@...il.com> wrote:
>>
>>
>>
>> On 7/17/19 11:52 PM, Thomas Gleixner wrote:
>>> Sudip,
>>>
>>> On Wed, 17 Jul 2019, Sudip Mukherjee wrote:
>>>> On Wed, Jul 17, 2019 at 9:53 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>>>>> You can hack ksoftirq_running() to return always false to avoid this, but
>>>>> that might cause application starvation and a huge packet buffer backlog
>>>>> when the amount of incoming packets makes the CPU do nothing else than
>>>>> softirq processing.
>>>>
>>>> I tried that now, it is better but still not as good as v3.8
>>>> Now I am getting 375.9usec as the maximum time between raising the softirq
>>>> and it starting to execute and packet drops still there.
>>>>
>>>> And just a thought, do you think there should be a CONFIG_ option for
>>>> this feature of ksoftirqd_running() so that it can be disabled if needed
>>>> by users like us?
>>>
>>> If at all then a sysctl to allow runtime control.
>>>
>
> <snip>
>
>>
>> ksoftirqd might be spuriously scheduled from tx path, when
>> __qdisc_run() also reacts to need_resched().
>>
>> By raising NET_TX while we are processing NET_RX (say we send a TCP ACK packet
>> in response to an incoming packet), we force __do_softirq() to perform
>> another loop; before starting that round it will also check need_resched()
>> and possibly call wakeup_softirqd().
>>
>> I wonder if the following patch makes any difference.
>>
>> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
>> index 11c03cf4aa74b44663c74e0e3284140b0c75d9c4..ab736e974396394ae6ba409868aaea56a50ad57b 100644
>> --- a/net/sched/sch_generic.c
>> +++ b/net/sched/sch_generic.c
>> @@ -377,6 +377,8 @@ void __qdisc_run(struct Qdisc *q)
>>         int packets;
>>
>>         while (qdisc_restart(q, &packets)) {
>> +               if (qdisc_is_empty(q))
>> +                       break;
>
> unfortunately it's v4.14.55, and qdisc_is_empty() is not yet introduced.
> And I can not backport 28cff537ef2e ("net: sched: add empty status
> flag for NOLOCK qdisc") either, as TCQ_F_NOLOCK is not there. :(
>
On old kernels, you can simply use:

static inline bool qdisc_is_empty(struct Qdisc *q)
{
        return !qdisc_qlen(q);
}