Message-Id: <20190228171242.32144-36-frederic@kernel.org>
Date: Thu, 28 Feb 2019 18:12:40 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"David S . Miller" <davem@...emloft.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mauro Carvalho Chehab <mchehab+samsung@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Pavan Kondeti <pkondeti@...eaurora.org>,
Ingo Molnar <mingo@...nel.org>,
Joel Fernandes <joel@...lfernandes.org>
Subject: [PATCH 35/37] softirq: Allow softirqs to interrupt vector-specific masked contexts

Remove the old protections that prevented softirqs from interrupting any
softirq-disabled context. Now that specific vectors can be disabled over
a given piece of code, we want to be able to soft-interrupt those places
with the other, still enabled vectors. To that end, most of the
in_interrupt() checks, which also catch softirq-disabled sections, become
in_irq() checks, which only catch hardirq context (see the excerpt below).
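
For context, here is a short excerpt of the existing
include/linux/preempt.h definitions that distinguish the two checks
(illustrative only, not part of this patch):

	#define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
	#define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
	#define irq_count()	(preempt_count() & \
				 (HARDIRQ_MASK | SOFTIRQ_MASK | NMI_MASK))

	/*
	 * in_irq() is true only in hardirq context, whereas in_interrupt()
	 * is also true while a softirq is being served or is disabled.
	 * Testing in_irq() therefore lets pending vectors run from a
	 * section that has only masked other, unrelated vectors.
	 */
	#define in_irq()		(hardirq_count())
	#define in_interrupt()		(irq_count())
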
Reviewed-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@...nel.org>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: David S. Miller <davem@...emloft.net>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
---
kernel/softirq.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
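
For illustration, a sketch of the kind of section this patch makes
soft-interruptible. The helper names follow the per-vector masking API
introduced earlier in this series; treat the exact signatures as
illustrative, not authoritative:

	/* Mask only the tasklet vector across this section. */
	local_bh_disable_mask(BIT(TASKLET_SOFTIRQ));

	/*
	 * Before this patch, a TIMER_SOFTIRQ raised here stayed pending
	 * until the section ended, because the in_interrupt() checks also
	 * caught the softirq-disabled count. With the in_irq() checks
	 * below, the still-enabled vectors may be serviced right away.
	 */

	local_bh_enable_mask(BIT(TASKLET_SOFTIRQ));
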
diff --git a/kernel/softirq.c b/kernel/softirq.c
index bb841e5d9951..95156afb768f 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -240,7 +240,7 @@ static void local_bh_enable_ip_mask(unsigned long ip, unsigned int cnt,
 	 */
 	preempt_count_sub(cnt - 1);
 
-	if (unlikely(!in_interrupt() && softirq_pending_enabled())) {
+	if (unlikely(softirq_pending_enabled())) {
 		/*
 		 * Run softirq if any pending. And do it in its own stack
 		 * as we may be calling this deep in a task call stack already.
@@ -390,7 +390,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 	lockdep_softirq_end(in_hardirq);
 	account_irq_exit_time(current);
 	__local_bh_enable_no_softirq(SOFTIRQ_OFFSET);
-	WARN_ON_ONCE(in_interrupt());
+	WARN_ON_ONCE(in_irq());
 	current_restore_flags(old_flags, PF_MEMALLOC);
 }
 
@@ -399,7 +399,7 @@ asmlinkage __visible void do_softirq(void)
 	__u32 pending;
 	unsigned long flags;
 
-	if (in_interrupt())
+	if (in_irq())
 		return;
 
 	local_irq_save(flags);
@@ -482,7 +482,7 @@ void irq_exit(void)
 #endif
 	account_irq_exit_time(current);
 	preempt_count_sub(HARDIRQ_OFFSET);
-	if (!in_interrupt() && softirq_pending_enabled())
+	if (!in_irq() && softirq_pending_enabled())
 		invoke_softirq();
 
 	tick_irq_exit();
@@ -506,7 +506,7 @@ inline void raise_softirq_irqoff(unsigned int nr)
 	 * Otherwise we wake up ksoftirqd to make sure we
 	 * schedule the softirq soon.
 	 */
-	if (!in_interrupt())
+	if (!in_irq())
 		wakeup_softirqd();
 }
 
--
2.21.0