Message-Id: <20200915115609.85106-3-qianjun.kernel@gmail.com>
Date: Tue, 15 Sep 2020 19:56:07 +0800
From: qianjun.kernel@...il.com
To: tglx@...utronix.de, peterz@...radead.org, will@...nel.org,
luto@...nel.org, linux-kernel@...r.kernel.org
Cc: laoar.shao@...il.com, qais.yousef@....com, urezki@...il.com,
jun qian <qianjun.kernel@...il.com>
Subject: [PATCH V7 2/4] softirq: Factor loop termination condition
From: jun qian <qianjun.kernel@...il.com>
Add a helper, __softirq_needs_break(), that factors out the loop
termination condition of softirq processing.
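For illustration only (not part of the patch), a minimal userspace sketch
of the consolidated check is below; clock_gettime() stands in for
sched_clock() and need_resched() is stubbed out, so it only demonstrates
the shape of the break condition and the restart limit:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_MSEC		1000000ULL
#define MAX_SOFTIRQ_TIME	(2 * NSEC_PER_MSEC)
#define MAX_SOFTIRQ_RESTART	10

/* Userspace stand-in for sched_clock(): monotonic time in nanoseconds. */
static uint64_t fake_sched_clock(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Stub: there is no scheduler here, so never ask for a reschedule. */
static bool fake_need_resched(void)
{
	return false;
}

/* Mirrors the shape of __softirq_needs_break() from the diff below. */
static bool softirq_needs_break(uint64_t start)
{
	if (fake_need_resched())
		return true;

	if (fake_sched_clock() - start >= MAX_SOFTIRQ_TIME)
		return true;

	return false;
}

int main(void)
{
	uint64_t start = fake_sched_clock();
	unsigned int max_restart = MAX_SOFTIRQ_RESTART;
	unsigned int loops = 0;

	/*
	 * Same termination shape as the restart check in __do_softirq():
	 * keep going while neither the 2 ms budget nor the restart limit
	 * is exhausted.
	 */
	while (!softirq_needs_break(start) && --max_restart)
		loops++;

	printf("broke out after %u iterations\n", loops);
	return 0;
}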
Signed-off-by: jun qian <qianjun.kernel@...il.com>
---
kernel/softirq.c | 44 +++++++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 19 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 0db77ab..5a627cd 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -198,22 +198,6 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
}
EXPORT_SYMBOL(__local_bh_enable_ip);
-/*
- * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
- * but break the loop if need_resched() is set or after 2 ms.
- * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in
- * certain cases, such as stop_machine(), jiffies may cease to
- * increment and so we need the MAX_SOFTIRQ_RESTART limit as
- * well to make sure we eventually return from this method.
- *
- * These limits have been established via experimentation.
- * The two things to balance is latency against fairness -
- * we want to handle softirqs as soon as possible, but they
- * should not be able to lock up the box.
- */
-#define MAX_SOFTIRQ_TIME (2 * NSEC_PER_MSEC)
-#define MAX_SOFTIRQ_RESTART 10
-
#ifdef CONFIG_TRACE_IRQFLAGS
/*
* When we run softirqs from irq_exit() and thus on the hardirq stack we need
@@ -247,11 +231,34 @@ static inline void lockdep_softirq_end(bool in_hardirq)
static inline void lockdep_softirq_end(bool in_hardirq) { }
#endif
+/*
+ * We restart softirq processing but break the loop if need_resched() is
+ * set or after 2 ms. MAX_SOFTIRQ_RESTART guarantees loop termination
+ * if sched_clock() were ever to stall.
+ *
+ * These limits have been established via experimentation. The two things
+ * to balance are latency and fairness - we want to handle softirqs as
+ * soon as possible, but they should not be able to lock up the box.
+ */
+#define MAX_SOFTIRQ_TIME (2 * NSEC_PER_MSEC)
+#define MAX_SOFTIRQ_RESTART 10
+
+static inline bool __softirq_needs_break(u64 start)
+{
+ if (need_resched())
+ return true;
+
+ if (sched_clock() - start >= MAX_SOFTIRQ_TIME)
+ return true;
+
+ return false;
+}
+
asmlinkage __visible void __softirq_entry __do_softirq(void)
{
u64 start = sched_clock();
unsigned long old_flags = current->flags;
- int max_restart = MAX_SOFTIRQ_RESTART;
+ unsigned int max_restart = MAX_SOFTIRQ_RESTART;
struct softirq_action *h;
bool in_hardirq;
__u32 pending;
@@ -308,8 +315,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
pending = local_softirq_pending();
if (pending) {
- if (sched_clock() - start < MAX_SOFTIRQ_TIME && !need_resched() &&
- --max_restart)
+ if (!__softirq_needs_break(start) && --max_restart)
goto restart;
wakeup_softirqd();
--
1.8.3.1