Message-Id: <1400869003-27769-17-git-send-email-morten.rasmussen@arm.com>
Date: Fri, 23 May 2014 19:16:43 +0100
From: Morten Rasmussen <morten.rasmussen@....com>
To: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
peterz@...radead.org, mingo@...nel.org
Cc: rjw@...ysocki.net, vincent.guittot@...aro.org,
daniel.lezcano@...aro.org, preeti@...ux.vnet.ibm.com,
dietmar.eggemann@....com
Subject: [RFC PATCH 16/16] sched: Disable wake_affine to broaden the scope of wakeup target cpus

SD_WAKE_AFFINE is currently set by default at all levels, which means
that wakeups are always handled inside the lowest-level sched_domain.
As a result, a tiny periodic task is very likely to stay on the cpu it
was forked on forever. To save energy we need to revisit the task
placement decision every now and again to ensure that we don't keep
waking the same cpu when there are cheaper alternatives.
One way is simply to disable wake_affine and rely on the fork/exec
balancing mechanism (find_idlest_{group,cpu}). This is what this patch
does.
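
(Purely for illustration, not kernel code: below is a tiny user-space C
model of the decision this patch changes. The flag values and the
choose_wake_path() helper are invented for the example; in the kernel
the choice is made in select_task_rq_fair() based on the sched_domain
flags set up in sd_init().)

/* Simplified user-space model of wakeup path selection.
 * Flag bits and choose_wake_path() are illustrative only; they are
 * not the actual kernel definitions.
 */
#include <stdio.h>

#define SD_BALANCE_WAKE  0x01   /* do slow-path balancing at wakeup */
#define SD_WAKE_AFFINE   0x02   /* allow affine (waker-local) wakeups */

/* Pick which placement path a wakeup would take for a domain
 * with the given flags. */
static const char *choose_wake_path(unsigned int sd_flags)
{
	if (sd_flags & SD_WAKE_AFFINE)
		return "affine: select_idle_sibling() near prev/waking cpu";
	if (sd_flags & SD_BALANCE_WAKE)
		return "slow path: find_idlest_group()/find_idlest_cpu()";
	return "no balancing: stay on prev_cpu";
}

int main(void)
{
	/* Mainline default: affine wakeups, no slow-path wake balancing. */
	printf("default:      %s\n", choose_wake_path(SD_WAKE_AFFINE));
	/* With this patch (CONFIG_SCHED_ENERGY): wide search at wakeup. */
	printf("energy-aware: %s\n", choose_wake_path(SD_BALANCE_WAKE));
	return 0;
}
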
An alternative is to let the platform remove the SD_WAKE_AFFINE flag
from lower levels to increase the search space for
select_idle_sibling().
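
(Again only an illustrative user-space model, not a proposed
implementation: the level names, spans, flag value and the
wake_search_level() helper below are invented. It just shows how
clearing SD_WAKE_AFFINE on the lower levels would push the wakeup
search out to the span of a higher level.)

/* Toy model: the wakeup search span is the span of the lowest
 * sched_domain level that still has SD_WAKE_AFFINE set.
 * Level names, spans and flag values are invented for the example.
 */
#include <stdio.h>

#define SD_WAKE_AFFINE 0x1

struct level {
	const char *name;
	int nr_cpus;          /* cpus spanned by this level */
	unsigned int flags;
};

static const struct level *wake_search_level(const struct level *lv, int n)
{
	for (int i = 0; i < n; i++)        /* walk from lowest to highest */
		if (lv[i].flags & SD_WAKE_AFFINE)
			return &lv[i];
	return NULL;                       /* fall back to the slow path */
}

int main(void)
{
	/* Default: SD_WAKE_AFFINE everywhere -> search stays in SMT/MC. */
	struct level dflt[] = {
		{ "SMT", 2, SD_WAKE_AFFINE },
		{ "MC",  4, SD_WAKE_AFFINE },
		{ "DIE", 8, SD_WAKE_AFFINE },
	};
	/* Platform clears the flag on the lower levels -> wider search. */
	struct level wide[] = {
		{ "SMT", 2, 0 },
		{ "MC",  4, 0 },
		{ "DIE", 8, SD_WAKE_AFFINE },
	};
	const struct level *a = wake_search_level(dflt, 3);
	const struct level *b = wake_search_level(wide, 3);

	printf("default: search %d cpus (%s)\n", a->nr_cpus, a->name);
	printf("wide:    search %d cpus (%s)\n", b->nr_cpus, b->name);
	return 0;
}
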
Signed-off-by: Morten Rasmussen <morten.rasmussen@....com>
---
kernel/sched/core.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 49b895a..eeb0508 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6069,8 +6069,13 @@ sd_init(struct sched_domain_topology_level *tl, int cpu)
| 1*SD_BALANCE_NEWIDLE
| 1*SD_BALANCE_EXEC
| 1*SD_BALANCE_FORK
+#ifdef CONFIG_SCHED_ENERGY
+ | 1*SD_BALANCE_WAKE
+ | 0*SD_WAKE_AFFINE
+#else
| 0*SD_BALANCE_WAKE
| 1*SD_WAKE_AFFINE
+#endif
| 0*SD_SHARE_CPUPOWER
| 0*SD_SHARE_PKG_RESOURCES
| 0*SD_SERIALIZE
--
1.7.9.5