Message-ID: <20150929105603.GG11639@twins.programming.kicks-ass.net>
Date:	Tue, 29 Sep 2015 12:56:03 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	mingo@...nel.org
Cc:	linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
	fweisbec@...il.com, oleg@...hat.com, umgwanakikbuti@...il.com,
	tglx@...utronix.de, rostedt@...dmis.org
Subject: [RFC][PATCH v2 12/11] sched: Add preempt_count invariant check


Ingo requested that I keep the debug check for the preempt_count
invariant, hence this extra patch on top of the series: warn (once) and
repair the count if finish_task_switch() observes anything other than
the expected 2*PREEMPT_DISABLE_OFFSET.

Requested-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 include/asm-generic/preempt.h |    2 +-
 include/linux/sched.h         |   17 +++++++++--------
 kernel/sched/core.c           |    5 +++++
 3 files changed, 15 insertions(+), 9 deletions(-)
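
A quick standalone sanity check of the arithmetic below (illustrative
only, not a kernel build; it assumes the asm-generic value
PREEMPT_ENABLED == 0 and, under CONFIG_PREEMPT_COUNT,
PREEMPT_DISABLE_OFFSET == PREEMPT_OFFSET == 1):

	#include <assert.h>

	#define PREEMPT_ENABLED		0	/* asm-generic/preempt.h */
	#define PREEMPT_DISABLE_OFFSET	1	/* PREEMPT_OFFSET with CONFIG_PREEMPT_COUNT */

	#define PREEMPT_DISABLED	(PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
	#define INIT_PREEMPT_COUNT	(2*PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)

	int main(void)
	{
		/* One preempt_disable() from preemptible context. */
		assert(PREEMPT_DISABLED == 1);

		/*
		 * The schedule invariant: preempt_count() == 2 during
		 * schedule, which is also what new tasks now start with.
		 */
		assert(INIT_PREEMPT_COUNT == 2);

		return 0;
	}

Without CONFIG_PREEMPT_COUNT, PREEMPT_DISABLE_OFFSET collapses to 0 and
both values reduce to PREEMPT_ENABLED, matching the #ifdef block this
patch removes.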

--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -24,7 +24,7 @@ static __always_inline void preempt_coun
  * must be macros to avoid header recursion hell
  */
 #define init_task_preempt_count(p) do { \
-	task_thread_info(p)->preempt_count = 2*PREEMPT_DISABLED; \
+	task_thread_info(p)->preempt_count = INIT_PREEMPT_COUNT; \
 } while (0)
 
 #define init_idle_preempt_count(p, cpu) do { \
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -599,17 +599,18 @@ struct task_cputime_atomic {
 		.sum_exec_runtime = ATOMIC64_INIT(0),		\
 	}
 
-#ifdef CONFIG_PREEMPT_COUNT
-#define PREEMPT_DISABLED	(1 + PREEMPT_ENABLED)
-#else
-#define PREEMPT_DISABLED	PREEMPT_ENABLED
-#endif
+#define PREEMPT_DISABLED	(PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
 
 /*
- * Disable preemption until the scheduler is running.
- * Reset by start_kernel()->sched_init()->init_idle().
+ * Initial preempt_count value; reflects the preempt_count schedule invariant
+ * which states that during schedule preempt_count() == 2.
+ *
+ * This also results in the kernel starting with preemption disabled until
+ * the scheduler is initialized, see:
+ *
+ *   start_kernel()->sched_init()->init_idle().
  */
-#define INIT_PREEMPT_COUNT	PREEMPT_DISABLED
+#define INIT_PREEMPT_COUNT	(2*PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)
 
 /**
  * struct thread_group_cputimer - thread group interval timer counts
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2503,6 +2503,11 @@ static struct rq *finish_task_switch(str
 	struct mm_struct *mm = rq->prev_mm;
 	long prev_state;
 
+	if (unlikely(WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
+			       "corrupted preempt_count: %s/%d/0x%x\n",
+			       current->comm, current->pid, preempt_count())))
+		preempt_count_set(INIT_PREEMPT_COUNT);
+
 	rq->prev_mm = NULL;
 
 	/*
--
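On the recovery semantics of the check in finish_task_switch():
WARN_ONCE() evaluates to its condition on every pass but prints only the
first time, so a corrupted count is repaired on every affected context
switch while the log is hit at most once. A minimal userspace sketch of
that warn-once-and-repair pattern (warn_once() here is a hypothetical
stand-in, not the kernel macro):

	#include <stdio.h>

	/*
	 * Hypothetical stand-in for WARN_ONCE(): evaluates to the
	 * condition every time, prints the message only on the first
	 * true evaluation.
	 */
	static int warn_once(int cond, const char *fmt, int val)
	{
		static int warned;

		if (cond && !warned) {
			warned = 1;
			fprintf(stderr, fmt, val);
		}
		return cond;
	}

	int main(void)
	{
		int count = 3;	/* pretend the count got corrupted */

		if (warn_once(count != 2, "corrupted preempt_count: 0x%x\n", count))
			count = 2;	/* repair, as the patch does */

		printf("count = %d\n", count);	/* count = 2 */
		return 0;
	}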