Message-ID: <20210418221751.7edfc03b@imladris.surriel.com>
Date: Sun, 18 Apr 2021 22:17:51 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: kernel-team@...com, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Mel Gorman <mgorman@...e.de>
Subject: [PATCH] sched,fair: skip newidle_balance if a wakeup is pending
The try_to_wake_up() function has an optimization where it can queue
a task for wakeup on its previous CPU if that task is still in the
middle of going to sleep inside schedule().

Once schedule() re-enables IRQs, the task will be woken up with an
IPI and placed back on the runqueue.
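
For reference, the deferred wakeup path looks roughly like this (a
simplified sketch of __ttwu_queue_wakelist() in kernel/sched/core.c
around this kernel version; the conditions for taking this path are
elided):

	static void __ttwu_queue_wakelist(struct task_struct *p, int cpu,
					  int wake_flags)
	{
		struct rq *rq = cpu_rq(cpu);

		p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);

		/* Mark the pending wakeup; this is the flag this patch
		 * tests below. */
		WRITE_ONCE(rq->ttwu_pending, 1);

		/* Put the task on the remote CPU's wake list and kick it
		 * with an IPI; the IPI handler runs sched_ttwu_pending(),
		 * which enqueues the task and clears rq->ttwu_pending. */
		__smp_call_single_queue(cpu, &p->wake_entry.llist);
	}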

If we have such a wakeup pending, there is no need to search other
CPUs for runnable tasks. Just skip (or bail out early from) newidle
balancing and run the just-woken task.

For a memcache-like workload test, this reduces total CPU use by
about 2%, proportionally split between user and system time, and
reduces p95 and p99 application response times by 2-3% on average.
The schedstats run_delay number shows a similar improvement.
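
For anyone wanting to reproduce the run_delay numbers: run_delay is
the second field of /proc/<pid>/schedstat, the nanoseconds a task
spent waiting on a runqueue. A minimal userspace sketch for sampling
it, assuming schedstats/sched_info are enabled in the kernel config:

	#include <stdio.h>

	int main(int argc, char **argv)
	{
		/* Fields: ns on cpu, ns waiting on a runqueue
		 * (run_delay), and # of timeslices run on this cpu. */
		unsigned long long on_cpu, run_delay, slices;
		char path[64];
		FILE *f;

		snprintf(path, sizeof(path), "/proc/%s/schedstat",
			 argc > 1 ? argv[1] : "self");
		f = fopen(path, "r");
		if (!f || fscanf(f, "%llu %llu %llu",
				 &on_cpu, &run_delay, &slices) != 3) {
			perror(path);
			return 1;
		}
		printf("run_delay: %llu ns over %llu timeslices\n",
		       run_delay, slices);
		return 0;
	}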
Signed-off-by: Rik van Riel <riel@...riel.com>
---
kernel/sched/fair.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 69680158963f..19a92c48939f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7163,6 +7163,14 @@ done: __maybe_unused;
 	if (!rf)
 		return NULL;
 
+	/*
+	 * We have a woken up task pending here. No need to search for ones
+	 * elsewhere. This task will be enqueued the moment we unblock irqs
+	 * upon exiting the scheduler.
+	 */
+	if (rq->ttwu_pending)
+		return NULL;
+
 	new_tasks = newidle_balance(rq, rf);
 
 	/*
@@ -10661,7 +10669,8 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
 		 * Stop searching for tasks to pull if there are
 		 * now runnable tasks on this rq.
 		 */
-		if (pulled_task || this_rq->nr_running > 0)
+		if (pulled_task || this_rq->nr_running > 0 ||
+		    this_rq->ttwu_pending)
 			break;
 	}
 	rcu_read_unlock();
--
2.25.4