Message-Id: <cdc215f94d118d691d73df35275022331156fb45.1464130360.git.luto@kernel.org>
Date: Tue, 24 May 2016 15:54:04 -0700
From: Andy Lutomirski <luto@...nel.org>
To: x86@...nel.org
Cc: linux-kernel@...r.kernel.org, Borislav Petkov <bp@...en8.de>,
Andi Kleen <andi@...stfloor.org>,
Oleg Nesterov <oleg@...hat.com>,
Andy Lutomirski <luto@...nel.org>, stable@...r.kernel.org
Subject: [PATCH v2] x86/traps: Don't force in_interrupt() to return true in IST handlers

Forcing in_interrupt() to return true if we're not in a bona fide
interrupt confuses the softirq code. This fixes warnings like:

  NOHZ: local_softirq_pending 282

that can happen when running things like selftests/x86.
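
For context: raise_softirq_irqoff() only wakes ksoftirqd when
!in_interrupt(), on the assumption that irq_exit() will process the
pending softirq; the IST exit path never calls irq_exit(), though, so
the softirq stays pending and the NOHZ idle code warns about it. Below
is a minimal stand-alone sketch (not kernel code; the preempt_count bit
layout is copied from include/linux/preempt.h of this era and should be
treated as an assumption) showing why preempt_count_add(HARDIRQ_OFFSET)
forces in_interrupt() to report true while preempt_disable() does not:

  /*
   * Stand-alone model of the preempt_count bit layout (assumed from
   * include/linux/preempt.h): preemption count in bits 0-7, softirq
   * count in bits 8-15, hardirq count in bits 16-19, NMI in bit 20.
   */
  #include <stdio.h>

  #define PREEMPT_OFFSET  (1UL << 0)    /* what preempt_disable() adds */
  #define SOFTIRQ_OFFSET  (1UL << 8)
  #define HARDIRQ_OFFSET  (1UL << 16)   /* what ist_enter() used to add */
  #define NMI_OFFSET      (1UL << 20)

  #define SOFTIRQ_MASK    (0xffUL << 8)
  #define HARDIRQ_MASK    (0xfUL << 16)
  #define NMI_MASK        (0x1UL << 20)

  static unsigned long preempt_count;

  /* in_interrupt(): any hardirq, softirq, or NMI bits set. */
  static int in_interrupt(void)
  {
          return !!(preempt_count & (HARDIRQ_MASK | SOFTIRQ_MASK | NMI_MASK));
  }

  int main(void)
  {
          preempt_count += HARDIRQ_OFFSET;        /* old ist_enter() */
          printf("HARDIRQ_OFFSET:  in_interrupt() = %d\n", in_interrupt());
          preempt_count -= HARDIRQ_OFFSET;

          preempt_count += PREEMPT_OFFSET;        /* preempt_disable() */
          printf("PREEMPT_OFFSET:  in_interrupt() = %d\n", in_interrupt());
          return 0;
  }

With only PREEMPT_OFFSET set, in_interrupt() stays false, so
raise_softirq() can still wake ksoftirqd just as it would in ordinary
task context.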

This will change perf's static percpu buffer usage in IST context. I
think this is okay; it restores the historical (pre-4.0) behavior.
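
For reference, here is a similar stand-alone sketch of the perf side
(loosely modeled on get_recursion_context() in kernel/events/internal.h;
the details are an assumption for illustration). perf keeps one static
per-CPU buffer slot per context and selects it with
in_nmi()/in_irq()/in_softirq() checks, so without HARDIRQ_OFFSET an IST
handler falls back to the task-context slot, as it did before 4.0:

  /*
   * Stand-alone model of how perf picks a per-context buffer slot.
   * The in_*_flag variables stand in for the kernel's in_nmi(),
   * in_irq() and in_softirq() tests on preempt_count.
   */
  #include <stdio.h>

  enum ctx { CTX_TASK, CTX_SOFTIRQ, CTX_IRQ, CTX_NMI };

  static int in_nmi_flag, in_irq_flag, in_softirq_flag;

  static enum ctx get_recursion_context(void)
  {
          if (in_nmi_flag)
                  return CTX_NMI;
          if (in_irq_flag)
                  return CTX_IRQ;
          if (in_softirq_flag)
                  return CTX_SOFTIRQ;
          return CTX_TASK;
  }

  int main(void)
  {
          /* Old ist_enter(): HARDIRQ_OFFSET made in_irq() report true. */
          in_irq_flag = 1;
          printf("old IST context -> slot %d (IRQ)\n", get_recursion_context());

          /* New ist_enter(): preempt_disable() leaves in_irq() false. */
          in_irq_flag = 0;
          printf("new IST context -> slot %d (task)\n", get_recursion_context());
          return 0;
  }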
Cc: stable@...r.kernel.org
Fixes: 959274753857 ("x86, traps: Track entry into and exit from IST context")
Signed-off-by: Andy Lutomirski <luto@...nel.org>
Message-Id: <ce1cd1d47b486c142d57a9a8189470a1c3809a9c.1464062136.git.luto@...nel.org>
---
This is intended for x86/urgent.
Changes from v1:
- Get rid of the 3* crap (Linus)
- Improve changelog (PeterZ)
arch/x86/kernel/traps.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 06cbe25861f1..87bd6b6bf5bd 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -95,6 +95,12 @@ static inline void cond_local_irq_disable(struct pt_regs *regs)
 		local_irq_disable();
 }
 
+/*
+ * In IST context, we explicitly disable preemption. This serves two
+ * purposes: it makes it much less likely that we would accidentally
+ * schedule in IST context and it will force a warning if we somehow
+ * manage to schedule by accident.
+ */
 void ist_enter(struct pt_regs *regs)
 {
 	if (user_mode(regs)) {
@@ -109,13 +115,7 @@ void ist_enter(struct pt_regs *regs)
 		rcu_nmi_enter();
 	}
 
-	/*
-	 * We are atomic because we're on the IST stack; or we're on
-	 * x86_32, in which case we still shouldn't schedule; or we're
-	 * on x86_64 and entered from user mode, in which case we're
-	 * still atomic unless ist_begin_non_atomic is called.
-	 */
-	preempt_count_add(HARDIRQ_OFFSET);
+	preempt_disable();
 
 	/* This code is a bit fragile. Test it. */
 	RCU_LOCKDEP_WARN(!rcu_is_watching(), "ist_enter didn't work");
@@ -123,7 +123,7 @@ void ist_enter(struct pt_regs *regs)
 
 void ist_exit(struct pt_regs *regs)
 {
-	preempt_count_sub(HARDIRQ_OFFSET);
+	preempt_enable_no_resched();
 
 	if (!user_mode(regs))
 		rcu_nmi_exit();
@@ -154,7 +154,7 @@ void ist_begin_non_atomic(struct pt_regs *regs)
 	BUG_ON((unsigned long)(current_top_of_stack() -
 			       current_stack_pointer()) >= THREAD_SIZE);
 
-	preempt_count_sub(HARDIRQ_OFFSET);
+	preempt_enable_no_resched();
 }
 
 /**
@@ -164,7 +164,7 @@ void ist_begin_non_atomic(struct pt_regs *regs)
  */
 void ist_end_non_atomic(void)
 {
-	preempt_count_add(HARDIRQ_OFFSET);
+	preempt_disable();
 }
 
 static nokprobe_inline int
--
2.5.5