Message-ID: <870de0d9-5c08-9431-a46a-33ee4d3a6617@gmail.com>
Date: Thu, 18 Jul 2019 17:08:20 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Sudip Mukherjee <sudipm.mukherjee@...il.com>,
Eric Dumazet <eric.dumazet@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"David S. Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: regression with napi/softirq ?
On 7/18/19 2:55 PM, Sudip Mukherjee wrote:
> Thanks Eric. But there is no improvement in the delay between
> softirq_raise and softirq_entry with this change.
> Moving to a later kernel (Linus' master branch?) as Thomas has
> suggested in the other mail might be difficult atm. I can definitely
> move to v4.14.133 if that helps. Thomas?
If you are tracking max latency, then I guess you have to tweak SOFTIRQ_NOW_MASK
to include NET_RX_SOFTIRQ.

The patch I gave earlier would only lower the probability of these events, not completely get rid of them.
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 0427a86743a46b7e1891f7b6c1ff585a8a1695f5..302046dd8d7e6740e466c422954f22565fe19e69 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -81,7 +81,7 @@ static void wakeup_softirqd(void)
* right now. Let ksoftirqd handle this at its own rate, to get fairness,
* unless we're doing some of the synchronous softirqs.
*/
-#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ))
+#define SOFTIRQ_NOW_MASK ((1 << HI_SOFTIRQ) | (1 << TASKLET_SOFTIRQ) | (1 << NET_RX_SOFTIRQ))
static bool ksoftirqd_running(unsigned long pending)
{
struct task_struct *tsk = __this_cpu_read(ksoftirqd);
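
For context, this is roughly why the mask matters (a simplified sketch of the
v4.14-era logic, not a verbatim copy of the kernel source; the *_sketch names
are mine): when ksoftirqd is already runnable on the CPU, the softirq entry
path skips inline processing for any pending vector that is not in
SOFTIRQ_NOW_MASK, so NET_RX work waits until the scheduler actually runs
ksoftirqd. That scheduling gap is what shows up as the softirq_raise ->
softirq_entry latency.

/*
 * Simplified sketch (assumed behaviour on a v4.14-era kernel):
 * if ksoftirqd is already runnable and none of the pending vectors
 * is in SOFTIRQ_NOW_MASK, softirq handling is deferred to ksoftirqd
 * instead of being run inline at irq exit.
 */
static bool ksoftirqd_running_sketch(unsigned long pending)
{
	struct task_struct *tsk = __this_cpu_read(ksoftirqd);

	/* HI/TASKLET (and NET_RX with the tweak above) stay synchronous */
	if (pending & SOFTIRQ_NOW_MASK)
		return false;
	return tsk && (tsk->state == TASK_RUNNING);
}

static inline void invoke_softirq_sketch(void)
{
	if (ksoftirqd_running_sketch(local_softirq_pending()))
		return;		/* defer to ksoftirqd: source of the observed latency */

	__do_softirq();		/* otherwise handle pending softirqs right away */
}

With NET_RX_SOFTIRQ added to the mask, the second branch is taken for pending
network RX work even while ksoftirqd is runnable, trading fairness for latency.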