Message-ID: <Y9116OLfP6GoZ0ez@slm.duckdns.org>
Date: Fri, 3 Feb 2023 11:00:24 -1000
From: Tejun Heo <tj@...nel.org>
To: Waiman Long <longman@...hat.com>
Cc: Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Will Deacon <will@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH] cgroup/cpuset: Don't filter offline CPUs in
cpuset_cpus_allowed() for top cpuset tasks
On Fri, Feb 03, 2023 at 11:40:40AM -0500, Waiman Long wrote:
> Since commit 8f9ea86fdf99 ("sched: Always preserve the user
> requested cpumask"), relax_compatible_cpus_allowed_ptr() is calling
> __sched_setaffinity() unconditionally. This helps to expose a bug in
> the current cpuset hotplug code where the cpumasks of the tasks in
> the top cpuset are not updated at all when some CPUs become online or
> offline. It is likely caused by the fact that some of the tasks in the
> top cpuset, like percpu kthreads, cannot have their cpu affinity changed.
>
> One way to reproduce this as suggested by Peter is:
> - boot machine
> - offline all CPUs except one
> - taskset -p ffffffff $$
> - online all CPUs
>
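For anyone wanting to reproduce this locally, here is a rough userspace sketch
of the steps above (illustrative only, not part of the patch; it assumes root,
sysfs CPU hotplug under /sys/devices/system/cpu, and glibc). It is not
identical to the shell recipe (everything happens in one process), but it
exercises the same sequence.

/* Repro sketch: offline all CPUs but cpu0, request affinity to every CPU
 * (the "taskset -p ffffffff $$" step), bring the CPUs back online, and
 * print the resulting affinity mask. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void set_cpu_online(int cpu, int online)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/online", cpu);
	f = fopen(path, "w");
	if (!f)
		return;		/* cpu0 usually has no "online" file */
	fprintf(f, "%d\n", online);
	fclose(f);
}

int main(void)
{
	int ncpus = sysconf(_SC_NPROCESSORS_CONF);
	cpu_set_t mask;
	int cpu;

	/* offline all CPUs except cpu0 */
	for (cpu = 1; cpu < ncpus; cpu++)
		set_cpu_online(cpu, 0);

	/* ask for every CPU while most of them are offline */
	CPU_ZERO(&mask);
	for (cpu = 0; cpu < ncpus; cpu++)
		CPU_SET(cpu, &mask);
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity");

	/* online all CPUs again and see what we ended up with */
	for (cpu = 1; cpu < ncpus; cpu++)
		set_cpu_online(cpu, 1);

	CPU_ZERO(&mask);
	sched_getaffinity(0, sizeof(mask), &mask);
	printf("CPUs in affinity mask after re-online: %d of %d\n",
	       CPU_COUNT(&mask), ncpus);
	return 0;
}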
> Fix this by allowing cpuset_cpus_allowed() to return a wider mask that
> includes offline CPUs for those tasks that are in the top cpuset. For
> tasks not in the top cpuset, the old rule applies and only online CPUs
> will be returned in the mask since hotplug events will update their
> cpumasks accordingly.
>
> Fixes: 8f9ea86fdf99 ("sched: Always preserve the user requested cpumask")
> Reported-by: Will Deacon <will@...nel.org>
> Originally-from: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Waiman Long <longman@...hat.com>
So, this is the replacement for the first patch[1] Will posted, right?
>  void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
>  {
>  	unsigned long flags;
> +	struct cpuset *cs;
> 
>  	spin_lock_irqsave(&callback_lock, flags);
> -	guarantee_online_cpus(tsk, pmask);
> +	rcu_read_lock();
> +
> +	cs = task_cs(tsk);
> +	if (cs != &top_cpuset)
> +		guarantee_online_cpus(tsk, pmask);
> +	/*
> +	 * TODO: Tasks in the top cpuset won't get updates to their cpumasks
> +	 * when a hotplug online/offline event happens. So we include all
> +	 * offline cpus in the allowed cpu list.
> +	 */
> +	if ((cs == &top_cpuset) || cpumask_empty(pmask)) {
> +		const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
> +
> +		/*
> +		 * We first exclude cpus allocated to partitions. If there is no
> +		 * allowable online cpu left, we fall back to all possible cpus.
> +		 */
> +		cpumask_andnot(pmask, possible_mask, top_cpuset.subparts_cpus);
and the differences are that
* It's only applied to the root cgroup.
* Cpus taken up by partitions are excluded.
Is my understanding correct?
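In case it helps to see the intended mask math in isolation, here is a toy
userspace model of the top-cpuset branch (plain integers standing in for
cpumasks; the function and variable names are made up for illustration):

/* Toy model: start from the task's possible CPUs, drop CPUs handed to
 * partitions, and fall back to the full possible mask if nothing online
 * is left.  A uint64_t stands in for struct cpumask. */
#include <stdint.h>
#include <stdio.h>

static uint64_t top_cpuset_allowed(uint64_t possible, uint64_t subparts,
				   uint64_t online)
{
	uint64_t pmask = possible & ~subparts;	/* cpumask_andnot() */

	if (!(pmask & online))			/* !cpumask_intersects() */
		pmask = possible;		/* fall back to all possible */
	return pmask;
}

int main(void)
{
	/* 8 possible CPUs, CPUs 4-7 in a partition, only CPU 5 online:
	 * nothing online survives the andnot, so we get all of them. */
	uint64_t allowed = top_cpuset_allowed(0xff, 0xf0, 0x20);

	printf("allowed: %#llx\n", (unsigned long long)allowed);	/* 0xff */
	return 0;
}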
> +		if (!cpumask_intersects(pmask, cpu_online_mask))
> +			cpumask_copy(pmask, possible_mask);
> +	}
> +
> +	rcu_read_unlock();
>  	spin_unlock_irqrestore(&callback_lock, flags);
So, I suppose you're suggesting applying this patch instead of the one Will
Deacon posted[1] and we need Will's second patch[2] on top, right?
[1] http://lkml.kernel.org/r/20230131221719.3176-3-will@kernel.org
[2] http://lkml.kernel.org/r/20230131221719.3176-3-will@kernel.org
Thanks.
--
tejun