Message-ID: <alpine.DEB.2.21.1907172345360.1778@nanos.tec.linutronix.de>
Date: Wed, 17 Jul 2019 23:52:37 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Sudip Mukherjee <sudipm.mukherjee@...il.com>
cc: "Peter Zijlstra (Intel)" <peterz@...radead.org>,
"David S. Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression with napi/softirq ?
Sudip,
On Wed, 17 Jul 2019, Sudip Mukherjee wrote:
> On Wed, Jul 17, 2019 at 9:53 PM Thomas Gleixner <tglx@...utronix.de> wrote:
> > You can hack ksoftirqd_running() to always return false to avoid this, but
> > that might cause application starvation and a huge packet buffer backlog
> > when the amount of incoming packets makes the CPU do nothing else than
> > softirq processing.
>
> I tried that now; it is better, but still not as good as v3.8.
> Now I am getting 375.9usec as the maximum time between raising the softirq
> and it starting to execute, and the packet drops are still there.
>
> And just a thought, do you think there should be a CONFIG_ option for
> this feature of ksoftirqd_running() so that it can be disabled if needed
> by users like us?
If at all, then a sysctl to allow runtime control.
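For illustration, such a knob could look roughly like the sketch below, modelled on ksoftirqd_running() in kernel/softirq.c around v5.2. The sysctl name and the variable are hypothetical and the sysctl-table plumbing is omitted; this is not a proposed patch, just the shape of a runtime override:

```c
/*
 * Hypothetical knob: 1 = current behaviour (defer to ksoftirqd when it
 * is runnable), 0 = never defer. It would be exposed as something like
 * a "kernel.softirq_defer" sysctl; both the name and the wiring are
 * made up here for illustration.
 */
static unsigned int sysctl_softirq_defer __read_mostly = 1;

static bool ksoftirqd_running(unsigned long pending)
{
	struct task_struct *tsk = __this_cpu_read(ksoftirqd);

	if (!sysctl_softirq_defer)	/* runtime override, no rebuild needed */
		return false;
	if (pending & SOFTIRQ_NOW_MASK)
		return false;
	return tsk && (tsk->state == TASK_RUNNING) &&
	       !__kthread_should_park(tsk);
}
```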
> Can you please think of anything else that might have changed which I still
> need to change to make the time comparable to v3.8?
Something within that small range of:
63592 files changed, 13783320 insertions(+), 5155492 deletions(-)
:)
Seriously, that can be anything.
Can you please test with Linus' head of tree and add some more
instrumentation, so we can see what holds off softirqs from being
processed. If the ksoftirqd enforcement is disabled, then the only reason
can be a long-lasting softirq-disabled region. Tracing should tell.
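On the instrumentation side, one option is to enable the stock irq:softirq_raise and irq:softirq_entry tracepoints and post-process the trace buffer to get the raise-to-entry latency per CPU and vector. The script below is a rough sketch of that; the sample lines are hand-made to mimic the standard trace output format, so the regex may need adjusting for different tracer options:

```python
import re

# Matches lines like:
#   <idle>-0  [001] ..s. 12345.678901: softirq_raise: vec=3 [action=NET_RX]
# i.e. "[cpu] flags timestamp: softirq_{raise,entry}: vec=N"
EVENT_RE = re.compile(
    r'\[(?P<cpu>\d+)\]\s+\S+\s+(?P<ts>\d+\.\d+):\s+'
    r'softirq_(?P<kind>raise|entry):\s+vec=(?P<vec>\d+)'
)

def softirq_latencies(lines):
    """Yield (cpu, vec, latency_us) for each raise -> entry pair."""
    pending = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        key = (int(m['cpu']), int(m['vec']))
        ts = float(m['ts'])
        if m['kind'] == 'raise':
            # Keep the earliest unserviced raise per (cpu, vec).
            pending.setdefault(key, ts)
        elif key in pending:
            yield key[0], key[1], (ts - pending.pop(key)) * 1e6

# Hand-made sample lines mimicking the trace format (not real data).
trace = [
    "<idle>-0       [001] ..s. 12345.678901: softirq_raise: vec=3 [action=NET_RX]",
    "ksoftirqd/1-16 [001] ..s. 12345.679276: softirq_entry: vec=3 [action=NET_RX]",
]

for cpu, vec, us in softirq_latencies(trace):
    print(f"cpu{cpu} vec{vec}: {us:.1f} us")
```

With the sample lines above this reports a 375.0 us raise-to-entry gap for NET_RX (vec 3) on CPU 1, which is the kind of number being chased here.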
Thanks,
tglx