Message-Id: <1473056403-7877-1-git-send-email-chengchao@kedacom.com>
Date: Mon, 5 Sep 2016 14:20:03 +0800
From: cheng chao <chengchao@...acom.com>
To: mingo@...nel.org, oleg@...hat.com, peterz@...radead.org,
tj@...nel.org, akpm@...ux-foundation.org, chris@...is-wilson.co.uk
Cc: linux-kernel@...r.kernel.org, cheng chao <chengchao@...acom.com>
Subject: [PATCH] sched/core: simpler function for sched_exec migration

When sched_exec() needs migration and CONFIG_PREEMPT_NONE=y,
migration_cpu_stop() does almost nothing, because by the time the stopper
runs, the caller is already !task_on_rq_queued().
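
For reference, a simplified sketch of migration_cpu_stop() (kernel/sched/core.c)
is shown below; it is illustrative only, not part of this patch, and elides the
locking done by the real function:

static int migration_cpu_stop(void *data)
{
        struct migration_arg *arg = data;
        struct task_struct *p = arg->task;
        struct rq *rq = this_rq();

        /* p->pi_lock and rq->lock are held in the real function */

        /*
         * Migrate only when the requesting task is still queued on this
         * runqueue.  Under CONFIG_PREEMPT_NONE the caller of stop_one_cpu()
         * has already blocked in wait_for_completion() and been dequeued,
         * so this test fails and the stop work is effectively a no-op.
         */
        if (task_rq(p) == rq && task_on_rq_queued(p))
                rq = __migrate_task(rq, p, arg->dest_cpu);

        return 0;
}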

Currently CONFIG_PREEMPT=y and CONFIG_PREEMPT_VOLUNTARY=y work well, because
the caller is rescheduled while it is still task_on_rq_queued() (see the
annotated stop_one_cpu() sketch after the call chains below):

1. when CONFIG_PREEMPT=y

   stop_one_cpu
    ->cpu_stop_queue_work
    ->spin_unlock_irqrestore (preempt_enable calls __preempt_schedule)

2. when CONFIG_PREEMPT_VOLUNTARY=y

   stop_one_cpu
    ->wait_for_completion
     ->...
      ->might_sleep() (which calls _cond_resched())
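
For clarity, here is a simplified, annotated sketch of stop_one_cpu()
(kernel/stop_machine.c); it is illustrative only and not part of this patch.
In all three preemption models the completion is waited for, but only in the
first two is the caller still queued when the stopper thread runs:

int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
{
        struct cpu_stop_done done;
        struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = &done };

        cpu_stop_init_done(&done, 1);
        if (!cpu_stop_queue_work(cpu, &work))
                return -ENOENT;

        /*
         * CONFIG_PREEMPT: the preempt_enable() in the
         * spin_unlock_irqrestore() at the end of cpu_stop_queue_work()
         * already rescheduled us while we were still TASK_RUNNING.
         *
         * CONFIG_PREEMPT_VOLUNTARY: the might_sleep() at the top of
         * wait_for_completion() calls _cond_resched(), likewise while we
         * are still TASK_RUNNING.
         *
         * CONFIG_PREEMPT_NONE: neither happens; we only schedule inside
         * wait_for_completion() after setting TASK_UNINTERRUPTIBLE, so we
         * are already dequeued by the time migration_cpu_stop() runs.
         */
        wait_for_completion(&done.completion);
        return done.ret;
}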

stop_one_cpu_sync() is introduced to address this problem; in addition, it
makes the CONFIG_PREEMPT=y and CONFIG_PREEMPT_VOLUNTARY=y cases simpler when
sched_exec() needs migration.

Signed-off-by: cheng chao <chengchao@...acom.com>
---
 include/linux/stop_machine.h |  1 +
 kernel/sched/core.c          |  2 +-
 kernel/stop_machine.c        | 21 +++++++++++++++++++++
 3 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
index 3cc9632..e4e7d42 100644
--- a/include/linux/stop_machine.h
+++ b/include/linux/stop_machine.h
@@ -28,6 +28,7 @@ struct cpu_stop_work {
 };

 int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg);
+void stop_one_cpu_sync(unsigned int cpu, cpu_stop_fn_t fn, void *arg);
 int stop_two_cpus(unsigned int cpu1, unsigned int cpu2, cpu_stop_fn_t fn, void *arg);
 bool stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg,
                         struct cpu_stop_work *work_buf);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 556cb07..2fd71e6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2958,7 +2958,7 @@ void sched_exec(void)
                struct migration_arg arg = { p, dest_cpu };

                raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-               stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
+               stop_one_cpu_sync(task_cpu(p), migration_cpu_stop, &arg);
                return;
        }
 unlock:
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 4a1ca5f..24f8637 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -130,6 +130,27 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
        return done.ret;
 }

+/*
+ * The caller stays task_on_rq_queued(), which makes this suitable for
+ * sched_exec() when migration is needed.
+ */
+void stop_one_cpu_sync(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
+{
+        struct cpu_stop_work work = { .fn = fn, .arg = arg, .done = NULL };
+
+        if (!cpu_stop_queue_work(cpu, &work))
+                return;
+
+#if defined(CONFIG_PREEMPT_NONE) || defined(CONFIG_PREEMPT_VOLUNTARY)
+        /*
+         * CONFIG_PREEMPT does not need to call schedule() here: the
+         * preempt_enable() at the end of cpu_stop_queue_work() already
+         * reschedules to the stopper thread.
+         */
+        schedule();
+#endif
+}
+
 /* This controls the threads on each CPU. */
 enum multi_stop_state {
        /* Dummy starting state for thread. */
--
2.4.11