Message-ID: <20170414084809.3dacde2a@gandalf.local.home>
Date:   Fri, 14 Apr 2017 08:48:09 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH v3] sched: Have do_idle() call __schedule() without 
 enabling preemption

From: Steven Rostedt (VMware) <rostedt@...dmis.org>

I finally got around to creating trampolines for dynamically allocated
ftrace_ops using synchronize_rcu_tasks(). For users of the ftrace
function hook callbacks, like perf, that allocate the ftrace_ops
descriptor via kmalloc() and friends, ftrace was not able to optimize
the functions being traced to use a trampoline, because the trampoline
would also need to be allocated dynamically. The problem is that such a
trampoline cannot be freed when CONFIG_PREEMPT is set, as there's no
way to tell whether a task was preempted while running on it. That was
before Paul McKenney implemented synchronize_rcu_tasks(), which makes
sure that all tasks (except idle) have either voluntarily scheduled out
or entered user space.
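
For context, the teardown side of this ends up looking roughly like the
sketch below. This is illustrative only and not part of this patch:
free_ftrace_trampoline() is a made-up helper name, while
synchronize_rcu_tasks() and arch_ftrace_trampoline_free() are the
existing kernel functions (assumes <linux/ftrace.h> and
<linux/rcupdate.h>):

	/* Illustrative sketch only -- not the actual ftrace code. */
	static void free_ftrace_trampoline(struct ftrace_ops *ops)
	{
		/*
		 * Wait until every task has either voluntarily scheduled
		 * out or entered user space, so no task can still be
		 * executing on the trampoline text.
		 */
		synchronize_rcu_tasks();

		/* Now the dynamically allocated trampoline can be freed. */
		arch_ftrace_trampoline_free(ops);
	}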

While testing this, I triggered this bug:

 BUG: unable to handle kernel paging request at ffffffffa0230077
 IP: 0xffffffffa0230077
 PGD 2414067 
 PUD 2415063 
 PMD c463c067 
 PTE 0
 
 Oops: 0010 [#1] PREEMPT SMP DEBUG_PAGEALLOC KASAN
 Modules linked in: [...]
 CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.11.0-rc3-test+ #356
 Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v02.05 05/07/2012
 task: ffff8800cebbd0c0 task.stack: ffff8800cebd8000
 RIP: 0010:0xffffffffa0230077
 RSP: 0018:ffff8800cebdfd80 EFLAGS: 00010286
 RAX: 0000000000000000 RBX: ffff8800cebbd0c0 RCX: ffffffff8121391e
 RDX: dffffc0000000000 RSI: ffffffff81ce7c78 RDI: ffff8800cebbf298
 RBP: ffff8800cebdfe28 R08: 0000000000000003 R09: 0000000000000000
 R10: 0000000000000000 R11: 0000000000000000 R12: ffff8800cebbd0c0
 R13: ffffffff828fbe90 R14: ffff8800d392c480 R15: ffffffff826cc380
 FS:  0000000000000000(0000) GS:ffff8800d3900000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: ffffffffa0230077 CR3: 0000000002413000 CR4: 00000000001406e0
 Call Trace:
  ? debug_smp_processor_id+0x17/0x20
  ? sched_ttwu_pending+0x79/0x190
  ? schedule+0x5/0xe0
  ? trace_hardirqs_on_caller+0x182/0x280
  schedule+0x5/0xe0
  schedule_preempt_disabled+0x18/0x30
  ? schedule+0x5/0xe0
  ? schedule_preempt_disabled+0x18/0x30
  do_idle+0x172/0x220


What happened was that the idle task was preempted on the trampoline.
As synchronize_rcu_tasks() ignores the idle thread, there's nothing
that lets ftrace know that the idle task was preempted on a trampoline.

The idle task should never need to enable preemption. The idle task is
simply a loop that either calls schedule() or places the CPU into idle
mode. In fact, having preemption enabled there is inefficient, because
an interrupt can arrive just as idle is about to call schedule() anyway,
which causes schedule() to be called twice: once when the interrupt
returns back to normal context, and then again in the normal path that
the idle loop is running in. The second call is pointless, as the task
has already scheduled.

The only reason schedule_preempt_disabled() enables preemption is so
that schedule(), and with it sched_submit_work(), can be called with
preemption enabled. As sched_submit_work() is a nop when the task is in
the TASK_RUNNING state, and idle is always in the running state, there
is no reason for idle to enable preemption. But that also means idle
cannot use schedule_preempt_disabled(), as other callers of that
function rely on sched_submit_work() being called (see the reference
sketch below).
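
For reference, the generic path that idle currently goes through looks
roughly like this (paraphrased from kernel/sched/core.c of that era and
trimmed to the relevant lines, so take it as a sketch rather than the
exact code):

	void __sched schedule_preempt_disabled(void)
	{
		sched_preempt_enable_no_resched();
		/* preemption stays enabled until schedule() disables it */
		schedule();
		preempt_disable();
	}

	asmlinkage __visible void __sched schedule(void)
	{
		struct task_struct *tsk = current;

		/* a nop when the task is TASK_RUNNING, i.e. for idle */
		sched_submit_work(tsk);
		do {
			preempt_disable();
			__schedule(false);
			sched_preempt_enable_no_resched();
		} while (need_resched());
	}

An interrupt arriving in that preempt-enabled window is what causes the
spurious second schedule() described above; and if idle happens to be
preempted on a trampoline there, synchronize_rcu_tasks() has no way to
notice it.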

Adding a new function, local to kernel/sched/, that allows idle to call
the scheduler without enabling preemption fixes the
synchronize_rcu_tasks() issue and also removes the pointless spurious
schedule() calls caused by interrupts happening in the brief window
where preemption is enabled just before schedule() is called.

Acked-by: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
---
Changes from v1:

  o Added Thomas to Cc as he wrote the original generic idle code

  o Added comment explaining why schedule_idle() exists

Changes from v2:

  o Updated change log and comments to reflect that schedule_idle does
    not call sched_submit_work()

  o Added WARN_ON_ONCE(current->state) to make sure the caller of
    schedule_idle() is always in the running state. Otherwise it should
    be calling schedule_preempt_disabled(), which leads to
    sched_submit_work() being called.

Index: linux-trace.git/kernel/sched/core.c
===================================================================
--- linux-trace.git.orig/kernel/sched/core.c
+++ linux-trace.git/kernel/sched/core.c
@@ -3502,6 +3502,31 @@ asmlinkage __visible void __sched schedu
 }
 EXPORT_SYMBOL(schedule);
 
+/*
+ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
+ * state (have scheduled out non-voluntarily) by making sure that all
+ * tasks have either left the run queue or have gone into user space.
+ * As idle tasks do not do either, they must not ever be preempted
+ * (schedule out non-voluntarily).
+ *
+ * schedule_idle() is similar to schedule_preempt_disabled() except that it
+ * never enables preemption because it does not call sched_submit_work().
+ */
+void __sched schedule_idle(void)
+{
+	/*
+	 * As this skips calling sched_submit_work(), which the idle task does
+	 * regardless because that function is a nop when the task is in a
+	 * TASK_RUNNING state, make sure this isn't used someplace that the
+	 * current task can be in any other state. Note, idle is always in the
+	 * TASK_RUNNING state.
+	 */
+	WARN_ON_ONCE(current->state);
+	do {
+		__schedule(false);
+	} while (need_resched());
+}
+
 #ifdef CONFIG_CONTEXT_TRACKING
 asmlinkage __visible void __sched schedule_user(void)
 {
Index: linux-trace.git/kernel/sched/idle.c
===================================================================
--- linux-trace.git.orig/kernel/sched/idle.c
+++ linux-trace.git/kernel/sched/idle.c
@@ -264,7 +264,7 @@ static void do_idle(void)
 	smp_mb__after_atomic();
 
 	sched_ttwu_pending();
-	schedule_preempt_disabled();
+	schedule_idle();
 }
 
 bool cpu_in_idle(unsigned long pc)
Index: linux-trace.git/kernel/sched/sched.h
===================================================================
--- linux-trace.git.orig/kernel/sched/sched.h
+++ linux-trace.git/kernel/sched/sched.h
@@ -1465,6 +1465,8 @@ static inline struct cpuidle_state *idle
 }
 #endif
 
+extern void schedule_idle(void);
+
 extern void sysrq_sched_debug_show(void);
 extern void sched_init_granularity(void);
 extern void update_max_interval(void);
