Message-ID: <20170907091338.orwxrqkbfkki3c24@hirez.programming.kicks-ass.net>
Date: Thu, 7 Sep 2017 11:13:38 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mike Galbraith <efault@....de>
Cc: Andy Lutomirski <luto@...capital.net>,
Andy Lutomirski <luto@...nel.org>,
Ingo Molnar <mingo@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Tejun Heo <tj@...nel.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH] sched/cpuset/pm: Fix cpuset vs suspend-resume
On Thu, Sep 07, 2017 at 08:15:29AM +0200, Mike Galbraith wrote:
> > I'd be suspicious of these here commits:
> >
> > 77d1dfda0e79 ("sched/topology, cpuset: Avoid spurious/wrong domain rebuilds")
>
> Seems to be that one. After suspend/resume, cpu7 (i4790+SMT) is online
> with NULL domain. offline/online it, it grows proper domains.
Shiny.. and *urgh*.
So if I offline cpus 4-39 (otherwise too much output) and then do:
# echo processors > /sys/power/pm_test
# echo disk > /sys/power/state
I get:
[ 568.315315] Disabling non-boot CPUs ...
[ 568.329059] CPU0 attaching NULL sched-domain.
[ 568.333942] CPU1 attaching NULL sched-domain.
[ 568.338805] CPU2 attaching NULL sched-domain.
[ 568.343675] CPU3 attaching NULL sched-domain.
[ 568.348645] CPU0 attaching sched-domain(s):
[ 568.353322] domain-0: span=0,2-3 level=MC
[ 568.357924] groups: 0:{ span=0 }, 2:{ span=2 }, 3:{ span=3 }
[ 568.364454] CPU2 attaching sched-domain(s):
[ 568.369125] domain-0: span=0,2-3 level=MC
[ 568.373699] groups: 2:{ span=2 }, 3:{ span=3 }, 0:{ span=0 }
[ 568.380224] CPU3 attaching sched-domain(s):
[ 568.384896] domain-0: span=0,2-3 level=MC
[ 568.389471] groups: 3:{ span=3 }, 0:{ span=0 }, 2:{ span=2 }
[ 568.448127] smpboot: CPU 1 is now offline
[ 568.462083] CPU0 attaching NULL sched-domain.
[ 568.466972] CPU2 attaching NULL sched-domain.
[ 568.471850] CPU3 attaching NULL sched-domain.
[ 568.476855] CPU0 attaching sched-domain(s):
[ 568.481551] domain-0: span=0,3 level=MC
[ 568.485986] groups: 0:{ span=0 }, 3:{ span=3 }
[ 568.491178] CPU3 attaching sched-domain(s):
[ 568.495864] domain-0: span=0,3 level=MC
[ 568.500252] groups: 3:{ span=3 }, 0:{ span=0 }
[ 568.522159] smpboot: CPU 2 is now offline
[ 568.537075] CPU0 attaching NULL sched-domain.
[ 568.541969] CPU3 attaching NULL sched-domain.
[ 568.546947] CPU0 attaching NULL sched-domain.
[ 568.558865] smpboot: CPU 3 is now offline
[ 568.563955] PM: hibernation debug: Waiting for 5 seconds.
[ 573.570008] Enabling non-boot CPUs ...
[ 573.594298] CPU0 attaching NULL sched-domain.
[ 573.599216] CPU0 attaching sched-domain(s):
[ 573.603891] domain-0: span=0-1 level=MC
[ 573.608281] groups: 0:{ span=0 }, 1:{ span=1 }
[ 573.613455] CPU1 attaching sched-domain(s):
[ 573.618127] domain-0: span=0-1 level=MC
[ 573.622509] groups: 1:{ span=1 }, 0:{ span=0 }
[ 573.627668] span: 0-1 (max cpu_capacity = 1024)
[ 573.632799] CPU1 is up
[ 573.650551] CPU0 attaching NULL sched-domain.
[ 573.655421] CPU1 attaching NULL sched-domain.
[ 573.660340] CPU0 attaching sched-domain(s):
[ 573.665012] domain-0: span=0-2 level=MC
[ 573.669402] groups: 0:{ span=0 }, 1:{ span=1 }, 2:{ span=2 }
[ 573.675930] CPU1 attaching sched-domain(s):
[ 573.680601] domain-0: span=0-2 level=MC
[ 573.684982] groups: 1:{ span=1 }, 2:{ span=2 }, 0:{ span=0 }
[ 573.691496] CPU2 attaching sched-domain(s):
[ 573.696166] domain-0: span=0-2 level=MC
[ 573.700546] groups: 2:{ span=2 }, 0:{ span=0 }, 1:{ span=1 }
[ 573.707064] span: 0-2 (max cpu_capacity = 1024)
[ 573.712198] CPU2 is up
[ 573.730022] CPU3 is up
[ 575.913686] Restarting tasks ... done.
It's failing to rebuild the domains for CPU3.
And I can see how that happens: note how cpuset_cpu_active() doesn't
build an identity domain for the last CPU but relies on
cpuset_update_active_cpus() to create the proper full cpuset
configuration.
_However_, since cpuset_cpu_inactive() did _NOT_ call
cpuset_update_active_cpus(), top_cpuset.effective_cpus wasn't updated,
and now cpuset_hotplug_workfn() will find the active mask matching and
not in fact update anything.
So this was already broken if you'd had an actual cpuset configuration,
it would never restore the proper cpuset domains and retain the identity
domain setup created during resume.
What's worse, we don't seem to wait for completion of the
cpuset_hotplug_workfn() before unfreezing userspace.
---
Subject: sched/cpuset/pm: Fix cpuset vs suspend-resume
Cpusets vs suspend-resume is _completely_ broken. And it got noticed
because it now resulted in non-cpuset usage breaking too.
On suspend cpuset_cpu_inactive() doesn't call into
cpuset_update_active_cpus() because it doesn't want to move tasks about,
there is no need, all tasks are frozen and won't run again until after
we've resumed everything.
But this means that when we finally do call into
cpuset_update_active_cpus() after resuming the last frozen cpu in
cpuset_cpu_active(), the top_cpuset will not have any difference with
the cpu_active_mask, and thus it will not in fact do _anything_.
So the cpuset configuration will not be restored. This was largely
hidden because we would unconditionally create identity domains and
mobile users would not in fact use cpusets much. And servers that do use
cpusets tend not to suspend-resume much.
An additional problem is that we'd not in fact wait for the cpuset work to
finish before resuming the tasks, allowing spurious migrations outside
of the specified domains.
Fix the rebuild by introducing cpuset_force_rebuild() and fix the
ordering with cpuset_wait_for_hotplug().
Cc: tj@...nel.org
Cc: rjw@...ysocki.net
Cc: efault@....de
Reported-by: Andy Lutomirski <luto@...nel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
include/linux/cpuset.h | 6 ++++++
kernel/cgroup/cpuset.c | 16 +++++++++++++++-
kernel/power/process.c | 5 ++++-
kernel/sched/core.c | 7 +++----
4 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index e74655d941b7..a1e6a33a4b03 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -51,7 +51,9 @@ static inline void cpuset_dec(void)
extern int cpuset_init(void);
extern void cpuset_init_smp(void);
+extern void cpuset_force_rebuild(void);
extern void cpuset_update_active_cpus(void);
+extern void cpuset_wait_for_hotplug(void);
extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -164,11 +166,15 @@ static inline bool cpusets_enabled(void) { return false; }
static inline int cpuset_init(void) { return 0; }
static inline void cpuset_init_smp(void) {}
+static inline void cpuset_force_rebuild(void) { }
+
static inline void cpuset_update_active_cpus(void)
{
partition_sched_domains(1, NULL, NULL);
}
+static inline void cpuset_wait_for_hotplug(void) { }
+
static inline void cpuset_cpus_allowed(struct task_struct *p,
struct cpumask *mask)
{
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 2f4039bafebb..0513ee39698b 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2267,6 +2267,13 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs)
mutex_unlock(&cpuset_mutex);
}
+static bool force_rebuild;
+
+void cpuset_force_rebuild(void)
+{
+ force_rebuild = true;
+}
+
/**
* cpuset_hotplug_workfn - handle CPU/memory hotunplug for a cpuset
*
@@ -2341,8 +2348,10 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
}
/* rebuild sched domains if cpus_allowed has changed */
- if (cpus_updated)
+ if (cpus_updated || force_rebuild) {
+ force_rebuild = false;
rebuild_sched_domains();
+ }
}
void cpuset_update_active_cpus(void)
@@ -2355,6 +2364,11 @@ void cpuset_update_active_cpus(void)
schedule_work(&cpuset_hotplug_work);
}
+void cpuset_wait_for_hotplug(void)
+{
+ flush_work(&cpuset_hotplug_work);
+}
+
/*
* Keep top_cpuset.mems_allowed tracking node_states[N_MEMORY].
* Call this routine anytime after node_states[N_MEMORY] changes.
diff --git a/kernel/power/process.c b/kernel/power/process.c
index 78672d324a6e..50f25cb370c6 100644
--- a/kernel/power/process.c
+++ b/kernel/power/process.c
@@ -20,8 +20,9 @@
#include <linux/workqueue.h>
#include <linux/kmod.h>
#include <trace/events/power.h>
+#include <linux/cpuset.h>
-/*
+/*
* Timeout for stopping processes
*/
unsigned int __read_mostly freeze_timeout_msecs = 20 * MSEC_PER_SEC;
@@ -202,6 +203,8 @@ void thaw_processes(void)
__usermodehelper_set_disable_depth(UMH_FREEZING);
thaw_workqueues();
+ cpuset_wait_for_hotplug();
+
read_lock(&tasklist_lock);
for_each_process_thread(g, p) {
/* No other threads should have PF_SUSPEND_TASK set */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 703f5831738e..7ba7084ffc0a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5558,16 +5558,15 @@ static void cpuset_cpu_active(void)
* operation in the resume sequence, just build a single sched
* domain, ignoring cpusets.
*/
- num_cpus_frozen--;
- if (likely(num_cpus_frozen)) {
- partition_sched_domains(1, NULL, NULL);
+ partition_sched_domains(1, NULL, NULL);
+ if (--num_cpus_frozen)
return;
- }
/*
* This is the last CPU online operation. So fall through and
* restore the original sched domains by considering the
* cpuset configurations.
*/
+ cpuset_force_rebuild();
}
cpuset_update_active_cpus();
}