Message-ID: <tip-bf89a304722f6904009499a31dc68ab9a5c9742e@git.kernel.org>
Date: Thu, 22 Sep 2016 06:59:24 -0700
From: tip-bot for Cheng Chao <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, oleg@...hat.com, tglx@...utronix.de,
mingo@...nel.org, hpa@...or.com, torvalds@...ux-foundation.org,
cs.os.kernel@...il.com, peterz@...radead.org
Subject: [tip:sched/core] stop_machine: Avoid a sleep and wakeup in
stop_one_cpu()
Commit-ID: bf89a304722f6904009499a31dc68ab9a5c9742e
Gitweb: http://git.kernel.org/tip/bf89a304722f6904009499a31dc68ab9a5c9742e
Author: Cheng Chao <cs.os.kernel@...il.com>
AuthorDate: Wed, 14 Sep 2016 10:01:50 +0800
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 22 Sep 2016 14:53:45 +0200
stop_machine: Avoid a sleep and wakeup in stop_one_cpu()
In case @cpu == smp_processor_id(), we can avoid a sleep+wakeup
cycle by doing a preemption.
Callers such as sched_exec() can benefit from this change.
Signed-off-by: Cheng Chao <cs.os.kernel@...il.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: akpm@...ux-foundation.org
Cc: chris@...is-wilson.co.uk
Cc: tj@...nel.org
Link: http://lkml.kernel.org/r/1473818510-6779-1-git-send-email-cs.os.kernel@gmail.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 kernel/sched/core.c   | 8 ++++++--
 kernel/stop_machine.c | 5 +++++
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c5f020c..ff4e3c0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1063,8 +1063,12 @@ static int migration_cpu_stop(void *data)
 	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
 	 * we're holding p->pi_lock.
 	 */
-	if (task_rq(p) == rq && task_on_rq_queued(p))
-		rq = __migrate_task(rq, p, arg->dest_cpu);
+	if (task_rq(p) == rq) {
+		if (task_on_rq_queued(p))
+			rq = __migrate_task(rq, p, arg->dest_cpu);
+		else
+			p->wake_cpu = arg->dest_cpu;
+	}
 	raw_spin_unlock(&rq->lock);
 	raw_spin_unlock(&p->pi_lock);

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 4a1ca5f..082e71f 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -126,6 +126,11 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
 	cpu_stop_init_done(&done, 1);
 	if (!cpu_stop_queue_work(cpu, &work))
 		return -ENOENT;
+	/*
+	 * In case @cpu == smp_processor_id() we can avoid a sleep+wakeup
+	 * cycle by doing a preemption:
+	 */
+	cond_resched();
 	wait_for_completion(&done.completion);
 	return done.ret;
 }
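
The pattern the changelog describes also exists outside the kernel: queue work
for another thread, then yield before blocking, so that by the time the waiter
checks the completion the work may already be done and the sleep+wakeup is
skipped. Below is a minimal user-space sketch, assuming POSIX threads; the
struct completion, wait_for_completion() and complete() helpers only loosely
mirror the kernel API, and sched_yield() stands in for the cond_resched()
added by this patch.

/* Build: gcc -Wall -pthread completion_yield.c -o completion_yield (hypothetical file name) */
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy "completion": a flag guarded by a mutex/condvar pair. */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	bool            done;
};

static void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = true;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)	/* blocks only if the work has not finished yet */
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

static struct completion done = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.cond = PTHREAD_COND_INITIALIZER,
	.done = false,
};

/* Stand-in for the queued cpu_stop work (e.g. migration_cpu_stop()). */
static void *worker(void *arg)
{
	puts("worker: running queued work");
	complete(&done);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	if (pthread_create(&tid, NULL, worker, NULL))	/* "queue" the work */
		return 1;

	/*
	 * Analogue of the cond_resched() this patch adds: give the worker
	 * a chance to run before we commit to sleeping on the completion.
	 */
	sched_yield();

	wait_for_completion(&done);	/* often returns without ever blocking */
	pthread_join(tid, NULL);
	return 0;
}

Unlike the kernel case, where the per-CPU stopper thread runs in the
highest-priority scheduling class so cond_resched() reliably switches to it
when @cpu == smp_processor_id(), sched_yield() here is only a hint to the
scheduler, so the fast path in this sketch is opportunistic rather than
guaranteed.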