Message-ID: <CADVatmPQRf9A9z1LbHe5cd+bFLrPGG12YxPh2-yXAj_C9s8ZeA@mail.gmail.com>
Date: Thu, 18 Jul 2019 13:55:38 +0100
From: Sudip Mukherjee <sudipm.mukherjee@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"David S. Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression with napi/softirq ?
On Thu, Jul 18, 2019 at 12:42 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
>
> On 7/18/19 1:18 PM, Sudip Mukherjee wrote:
> > Hi Eric,
> >
> > On Thu, Jul 18, 2019 at 7:58 AM Eric Dumazet <eric.dumazet@...il.com> wrote:
> >>
> >>
> >>
> >> On 7/17/19 11:52 PM, Thomas Gleixner wrote:
> >>> Sudip,
> >>>
> >>> On Wed, 17 Jul 2019, Sudip Mukherjee wrote:
> >>>> On Wed, Jul 17, 2019 at 9:53 PM Thomas Gleixner <tglx@...utronix.de> wrote:
> >>>>> You can hack ksoftirqd_running() to return always false to avoid this, but
> >>>>> that might cause application starvation and a huge packet buffer backlog
> >>>>> when the amount of incoming packets makes the CPU do nothing else than
> >>>>> softirq processing.
> >>>>
> >>>> I tried that now; it is better but still not as good as v3.8.
> >>>> Now I am getting 375.9 usec as the maximum time between raising the softirq
> >>>> and its starting to execute, and packet drops are still there.
> >>>>
> >>>> And just a thought, do you think there should be a CONFIG_ option for
> >>>> this feature of ksoftirqd_running() so that it can be disabled if needed
> >>>> by users like us?
> >>>
> >>> If at all then a sysctl to allow runtime control.
> >>>
> >
> > <snip>
> >
> >>
> >> ksoftirqd might be spuriously scheduled from the tx path, when
> >> __qdisc_run() also reacts to need_resched().
> >>
> >> By raising NET_TX while we are processing NET_RX (say we send a TCP ACK packet
> >> in response to an incoming packet), we force __do_softirq() to perform
> >> another loop; but before doing another round, it will also check need_resched()
> >> and eventually call wakeup_softirqd().
> >>
> >> I wonder if following patch makes any difference.
> >>
> >> diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
> >> index 11c03cf4aa74b44663c74e0e3284140b0c75d9c4..ab736e974396394ae6ba409868aaea56a50ad57b 100644
> >> --- a/net/sched/sch_generic.c
> >> +++ b/net/sched/sch_generic.c
> >> @@ -377,6 +377,8 @@ void __qdisc_run(struct Qdisc *q)
> >>  	int packets;
> >>
> >>  	while (qdisc_restart(q, &packets)) {
> >> +		if (qdisc_is_empty(q))
> >> +			break;
> >
> > Unfortunately it's v4.14.55, and qdisc_is_empty() is not yet introduced there.
> > And I cannot backport 28cff537ef2e ("net: sched: add empty status
> > flag for NOLOCK qdisc") either, as TCQ_F_NOLOCK is not there. :(
> >
>
> On old kernels, you can simply use
>
> static inline bool qdisc_is_empty(struct Qdisc *q)
> {
> 	return !qdisc_qlen(q);
> }
>
Thanks Eric. But there is no improvement in the delay between
softirq_raise and softirq_entry with this change.
Moving to a later kernel (Linus's master branch?) as Thomas
suggested in the other mail might be difficult atm. I can definitely
move to v4.14.133 if that helps. Thomas?
--
Regards
Sudip