Message-ID: <CAJhGHyB7fNvxyKwnMgWicvZN7oTnGYLBNH8cUjLg2EcKQ4YMMg@mail.gmail.com>
Date: Sat, 16 Jan 2021 22:39:03 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
Valentin Schneider <valentin.schneider@....com>,
Qian Cai <cai@...hat.com>,
Vincent Donnefort <vincent.donnefort@....com>,
Dexuan Cui <decui@...rosoft.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH 8/8] sched: Relax the set_cpus_allowed_ptr() semantics
On Sat, Jan 16, 2021 at 7:43 PM Peter Zijlstra <peterz@...radead.org> wrote:
>
> Now that we have KTHREAD_IS_PER_CPU to denote the critical per-cpu
> tasks to retain during CPU offline, we can relax the warning in
> set_cpus_allowed_ptr(). Any spurious kthread that wants to get on at
> the last minute will get pushed off before it can run.
>
> While during CPU online there is no harm, and actual benefit, to
> allowing kthreads back on early, it simplifies hotplug code.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Thanks!
Relaxing set_cpus_allowed_ptr() was also one of the choices I listed;
it can really simplify the hotplug code in the workqueue and maybe
other hotplug code as well.
Reviewed-by: Lai Jiangshan <jiangshanlai@...il.com>
> ---
> kernel/sched/core.c | 20 +++++++++-----------
> 1 file changed, 9 insertions(+), 11 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2342,7 +2342,9 @@ static int __set_cpus_allowed_ptr(struct
>
> if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
> /*
> - * Kernel threads are allowed on online && !active CPUs.
> + * Kernel threads are allowed on online && !active CPUs,
> + * however, during cpu-hot-unplug, even these might get pushed
> + * away if not KTHREAD_IS_PER_CPU.
> *
> * Specifically, migration_disabled() tasks must not fail the
> * cpumask_any_and_distribute() pick below, esp. so on
> @@ -2386,16 +2388,6 @@ static int __set_cpus_allowed_ptr(struct
>
> __do_set_cpus_allowed(p, new_mask, flags);
>
> - if (p->flags & PF_KTHREAD) {
> - /*
> - * For kernel threads that do indeed end up on online &&
> - * !active we want to ensure they are strict per-CPU threads.
> - */
> - WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) &&
> - !cpumask_intersects(new_mask, cpu_active_mask) &&
> - p->nr_cpus_allowed != 1);
> - }
> -
> return affine_move_task(rq, p, &rf, dest_cpu, flags);
>
> out:
> @@ -7519,6 +7511,12 @@ int sched_cpu_deactivate(unsigned int cp
> */
> synchronize_rcu();
>
> + /*
> + * From this point forward, this CPU will refuse to run any task that
> + * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
> + * push those tasks away until this gets cleared, see
> + * sched_cpu_dying().
> + */
> balance_push_set(cpu, true);
>
> rq_lock_irqsave(rq, &rf);
>
>