Message-Id: <1401431772-14320-10-git-send-email-yuyang.du@intel.com>
Date: Fri, 30 May 2014 14:36:05 +0800
From: Yuyang Du <yuyang.du@...el.com>
To: mingo@...hat.com, peterz@...radead.org, rafael.j.wysocki@...el.com,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Cc: arjan.van.de.ven@...el.com, len.brown@...el.com,
alan.cox@...el.com, mark.gross@...el.com, pjt@...gle.com,
bsegall@...gle.com, morten.rasmussen@....com,
vincent.guittot@...aro.org, rajeev.d.muralidhar@...el.com,
vishwesh.m.rudramuni@...el.com, nicole.chalhoub@...el.com,
ajaya.durg@...el.com, harinarayanan.seshadri@...el.com,
jacob.jun.pan@...ux.intel.com, fengguang.wu@...el.com,
yuyang.du@...el.com
Subject: [RFC PATCH 09/16 v3] Define and allocate a per CPU local cpumask for Workload Consolidation
We need these per-CPU cpumasks to aid in consolidated load balancing.
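
For illustration only (not part of this patch): a hedged sketch of how a
per-CPU scratch cpumask like local_cpu_mask is typically consumed during
balancing, so that no cpumask has to live on the stack when NR_CPUS is
large. The function name consolidation_balance() and the "remove shielded
CPUs" step are assumptions about later patches in this series, not code
that exists here.

#include <linux/cpumask.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);

/* Hypothetical consumer of the per-CPU mask allocated in the diff below. */
static void consolidation_balance(int this_cpu)
{
	/* Reuse the preallocated per-CPU mask instead of an on-stack cpumask. */
	struct cpumask *nonshielded = per_cpu(local_cpu_mask, this_cpu);

	cpumask_copy(nonshielded, cpu_active_mask);
	/* ... remove shielded/consolidated CPUs, then balance among the rest ... */
}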
Signed-off-by: Yuyang Du <yuyang.du@...el.com>
---
kernel/sched/fair.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 96d6f64..5755746 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6638,6 +6638,8 @@ out:
return ld_moved;
}
+static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);
+
/*
* idle_balance is called by schedule() if this_cpu is about to become
* idle. Attempts to pull tasks from other CPUs.
@@ -7692,6 +7694,12 @@ void print_cfs_stats(struct seq_file *m, int cpu)
__init void init_sched_fair_class(void)
{
#ifdef CONFIG_SMP
+ unsigned int i;
+ for_each_possible_cpu(i) {
+ zalloc_cpumask_var_node(&per_cpu(local_cpu_mask, i),
+ GFP_KERNEL, cpu_to_node(i));
+ }
+
open_softirq(SCHED_SOFTIRQ, run_rebalance_domains);
#ifdef CONFIG_NO_HZ_COMMON
--
1.7.9.5