Message-Id: <77973e75a10bf7ef9b33c664544667deee9e1a8e.1607036601.git.reinette.chatre@intel.com>
Date: Thu, 3 Dec 2020 15:25:48 -0800
From: Reinette Chatre <reinette.chatre@...el.com>
To: tglx@...utronix.de, fenghua.yu@...el.com, bp@...en8.de,
tony.luck@...el.com
Cc: kuo-lang.tseng@...el.com, shakeelb@...gle.com,
valentin.schneider@....com, mingo@...hat.com, babu.moger@....com,
james.morse@....com, hpa@...or.com, x86@...nel.org,
linux-kernel@...r.kernel.org,
Reinette Chatre <reinette.chatre@...el.com>,
stable@...r.kernel.org
Subject: [PATCH 1/3] x86/resctrl: Move setting task's active CPU in a mask into helpers
From: Fenghua Yu <fenghua.yu@...el.com>
Move the code that sets the bit for a running task's CPU in a CPU mask
into a pair of helpers, task_on_cpu() and set_task_cpumask(). The new
helper task_on_cpu() will be reused shortly.
Signed-off-by: Fenghua Yu <fenghua.yu@...el.com>
Signed-off-by: Reinette Chatre <reinette.chatre@...el.com>
Reviewed-by: Tony Luck <tony.luck@...el.com>
Cc: stable@...r.kernel.org
---
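For context: the typical caller first fills a CPU mask via
rdt_move_group_tasks() and then interrupts exactly those CPUs so the
new closid/rmid get loaded into the IA32_PQR_ASSOC MSR. Below is a
minimal sketch of that calling pattern; example_move_and_notify() is
an illustrative name only (roughly what the rmdir paths in rdtgroup.c
do), while rdt_move_group_tasks() and update_closid_rmid() are existing
functions in this file:

	/* Illustrative only: move tasks, then kick the CPUs they ran on. */
	static void example_move_and_notify(struct rdtgroup *from,
					    struct rdtgroup *to)
	{
		cpumask_var_t tmpmask;

		if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
			return;

		/* Collect the CPUs of all moved, currently running tasks. */
		rdt_move_group_tasks(from, to, tmpmask);

		/* IPI those CPUs so they reload their closid/rmid state. */
		update_closid_rmid(tmpmask, to);

		free_cpumask_var(tmpmask);
	}

Collecting the CPUs into a mask first keeps the tasklist_lock read-side
section free of IPIs; the function calls happen only after the lock is
released.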
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 47 +++++++++++++++++++-------
1 file changed, 34 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 6f4ca4bea625..68db7d2dec8f 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -525,6 +525,38 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp)
 	kfree(rdtgrp);
 }
 
+#ifdef CONFIG_SMP
+/* Get the CPU if the task is on it. */
+static bool task_on_cpu(struct task_struct *t, int *cpu)
+{
+	/*
+	 * This is safe on x86 w/o barriers as the ordering of writing to
+	 * task_cpu() and t->on_cpu is reverse to the reading here. The
+	 * detection is inaccurate as tasks might move or schedule before
+	 * the smp function call takes place. In such a case the function
+	 * call is pointless, but there is no other side effect.
+	 */
+	if (t->on_cpu) {
+		*cpu = task_cpu(t);
+
+		return true;
+	}
+
+	return false;
+}
+
+static void set_task_cpumask(struct task_struct *t, struct cpumask *mask)
+{
+	int cpu;
+
+	if (mask && task_on_cpu(t, &cpu))
+		cpumask_set_cpu(cpu, mask);
+}
+#else
+static inline void
+set_task_cpumask(struct task_struct *t, struct cpumask *mask) { }
+#endif
+
 struct task_move_callback {
 	struct callback_head	work;
 	struct rdtgroup		*rdtgrp;
@@ -2327,19 +2359,8 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
 			t->closid = to->closid;
 			t->rmid = to->mon.rmid;
 
-#ifdef CONFIG_SMP
-			/*
-			 * This is safe on x86 w/o barriers as the ordering
-			 * of writing to task_cpu() and t->on_cpu is
-			 * reverse to the reading here. The detection is
-			 * inaccurate as tasks might move or schedule
-			 * before the smp function call takes place. In
-			 * such a case the function call is pointless, but
-			 * there is no other side effect.
-			 */
-			if (mask && t->on_cpu)
-				cpumask_set_cpu(task_cpu(t), mask);
-#endif
+			/* If the task is on a CPU, set the CPU in the mask. */
+			set_task_cpumask(t, mask);
 		}
 	}
 	read_unlock(&tasklist_lock);
--
2.26.2