Subject: softirq: Factor loop termination condition
From: Peter Zijlstra
Date: Fri Sep 11 17:17:20 CEST 2020

From: Peter Zijlstra

Individual soft interrupts can run longer than the timeout, but the loop
termination conditions (timeout or need_resched()) are only evaluated after
processing all pending bits.

As a preparatory step to allow breaking the loop after each processed
pending bit, factor out the termination condition into helper functions.

[ tglx: Split the function, adapt to previous changes and update change log ]

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
---
 kernel/softirq.c |   40 +++++++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 17 deletions(-)

--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -204,22 +204,6 @@ void __local_bh_enable_ip(unsigned long
 }
 EXPORT_SYMBOL(__local_bh_enable_ip);
 
-/*
- * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
- * but break the loop if need_resched() is set or after 2 ms.
- * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in
- * certain cases, such as stop_machine(), jiffies may cease to
- * increment and so we need the MAX_SOFTIRQ_RESTART limit as
- * well to make sure we eventually return from this method.
- *
- * These limits have been established via experimentation.
- * The two things to balance is latency against fairness -
- * we want to handle softirqs as soon as possible, but they
- * should not be able to lock up the box.
- */
-#define MAX_SOFTIRQ_TIME (2 * NSEC_PER_MSEC)
-#define MAX_SOFTIRQ_RESTART 10
-
 #ifdef CONFIG_TRACE_IRQFLAGS
 /*
  * When we run softirqs from irq_exit() and thus on the hardirq stack we need
@@ -253,6 +237,28 @@ static inline bool lockdep_softirq_start
 static inline void lockdep_softirq_end(bool in_hardirq) { }
 #endif
 
+/*
+ * We restart softirq processing but break the loop if need_resched() is set
+ * or after 2 ms. The MAX_SOFTIRQ_RESTART limit guarantees loop termination
+ * should sched_clock() ever stall.
+ *
+ * These limits have been established via experimentation. The two things to
+ * balance are latency and fairness - we want to handle softirqs as soon as
+ * possible, but they should not be able to lock up the box.
+ */
+#define MAX_SOFTIRQ_TIME (2 * NSEC_PER_MSEC)
+#define MAX_SOFTIRQ_RESTART 10
+
+static inline bool __softirq_timeout(u64 tbreak)
+{
+	return sched_clock() >= tbreak;
+}
+
+static inline bool __softirq_needs_break(u64 tbreak)
+{
+	return need_resched() || __softirq_timeout(tbreak);
+}
+
 asmlinkage __visible void __softirq_entry __do_softirq(void)
 {
 	unsigned int vec_nr, max_restart = MAX_SOFTIRQ_RESTART;
@@ -306,7 +312,7 @@ asmlinkage __visible void __softirq_entr
 
 	pending = local_softirq_pending();
 	if (pending) {
-		if (sched_clock() < tbreak && !need_resched() && --max_restart)
+		if (!__softirq_needs_break(tbreak) && --max_restart)
 			goto restart;
 
 		wakeup_softirqd();
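
A note for readers following along outside the kernel tree: the control flow
above is easy to model in plain userspace C. The sketch below is illustrative
only, under obvious assumptions: now_ns(), work_timeout(), work_needs_break()
and the bitmask loop are made-up stand-ins rather than kernel API, and
need_resched() has no userspace equivalent, so the timeout carries the break
condition alone.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define MAX_WORK_TIME_NS	(2 * 1000000ULL)	/* models MAX_SOFTIRQ_TIME */
#define MAX_WORK_RESTART	10			/* models MAX_SOFTIRQ_RESTART */

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Factored termination conditions, shaped like the two new helpers */
static inline bool work_timeout(uint64_t tbreak)
{
	return now_ns() >= tbreak;
}

static inline bool work_needs_break(uint64_t tbreak)
{
	/* no need_resched() in userspace; the timeout stands in alone */
	return work_timeout(tbreak);
}

int main(void)
{
	uint64_t tbreak = now_ns() + MAX_WORK_TIME_NS;
	int max_restart = MAX_WORK_RESTART;
	unsigned int pending = 0x2f;	/* made-up pending bitmask */
	bool refilled = false;

restart:
	while (pending) {
		unsigned int bit = pending & -pending;

		pending &= ~bit;	/* "handle" one pending bit */
		printf("handled bit %#x\n", bit);
		/*
		 * The follow-up change this patch prepares for could check
		 * work_needs_break() right here, once per handled bit.
		 */
	}

	/* one simulated refill so the restart path is exercised */
	pending = refilled ? 0 : 0x3;
	refilled = true;
	if (pending) {
		/* same shape as the rewritten check in __do_softirq() */
		if (!work_needs_break(tbreak) && --max_restart)
			goto restart;
		/* the kernel would wakeup_softirqd() here; we just return */
	}
	return 0;
}

The value of the factoring shows in the loop body: with the condition in one
helper, a later change only has to add a single call after each handled bit
instead of restructuring the batch-level check.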