Message-ID: <YK4/6+lX4WJxcLDw@hirez.programming.kicks-ass.net>
Date: Wed, 26 May 2021 14:32:43 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <valentin.schneider@....com>
Cc: Will Deacon <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Quentin Perret <qperret@...gle.com>, Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
kernel-team@...roid.com
Subject: Re: [PATCH v7 01/22] sched: Favour predetermined active CPU as
migration destination

On Wed, May 26, 2021 at 12:14:20PM +0100, Valentin Schneider wrote:
> On 25/05/21 16:14, Will Deacon wrote:
> > @@ -1956,12 +1958,8 @@ static int migration_cpu_stop(void *data)
> > complete = true;
> > }
> >
> > - if (dest_cpu < 0) {
> > - if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
> > - goto out;
> > -
> > - dest_cpu = cpumask_any_distribute(&p->cpus_mask);
> > - }
> > + if (dest_mask && (cpumask_test_cpu(task_cpu(p), dest_mask)))
> > + goto out;
> >
>
> IIRC the reason we deferred the pick to migration_cpu_stop() was because of
> those insane races involving multiple SCA calls the likes of:
>
> p->cpus_mask = [0, 1]; p on CPU0
>
>     CPUx                       CPUy                      CPU0
>
>     SCA(p, [2])
>       __do_set_cpus_allowed();
>       queue migration_cpu_stop()
>                                SCA(p, [3])
>                                  __do_set_cpus_allowed();
>                                                          migration_cpu_stop()
>
> The stopper needs to use the latest cpumask set by the second SCA despite
> having an arg->pending set up by the first SCA. Doesn't this break here?
Yep.
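
Right, the hazard is that whatever the first SCA stuffed into its
migration_arg can be stale by the time the stopper finally gets to run. A
userspace toy of that interleaving, just to make the shape of it explicit --
sca(), stopper(), cpus_mask and dest_cpu only mirror the kernel names, this
is obviously not the real kernel/sched/core.c code:

#include <stdio.h>

struct migration_arg {
	int dest_cpu;			/* captured when the stopper was queued */
};

struct task {
	unsigned int cpus_mask;		/* bitmask of allowed CPUs */
	int cpu;			/* CPU the task currently sits on */
};

/* set_cpus_allowed(): update the mask, maybe record a destination */
static void sca(struct task *p, unsigned int new_mask, struct migration_arg *arg)
{
	p->cpus_mask = new_mask;
	if (arg)
		arg->dest_cpu = __builtin_ctz(new_mask);	/* first allowed CPU */
}

/* the deferred stopper: runs with whatever arg it was queued with */
static void stopper(struct task *p, struct migration_arg *arg)
{
	if (!(p->cpus_mask & (1u << arg->dest_cpu)))
		printf("stale dest_cpu %d is not in the current mask 0x%x\n",
		       arg->dest_cpu, p->cpus_mask);
	else
		p->cpu = arg->dest_cpu;
}

int main(void)
{
	struct task p = { .cpus_mask = 0x3, .cpu = 0 };	/* [0, 1], on CPU0 */
	struct migration_arg arg;

	sca(&p, 1u << 2, &arg);		/* CPUx: SCA(p, [2]), queues the stopper */
	sca(&p, 1u << 3, NULL);		/* CPUy: SCA(p, [3]), no fresh dest_cpu */
	stopper(&p, &arg);		/* CPU0: finally runs, arg is stale */
	return 0;
}

The printf fires, which is exactly the window your diff below closes by
refreshing dest_cpu.
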
> I'm not sure I've paged back in all of the subtleties laying in ambush
> here, but what about the below?
>
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 5226cc26a095..cd447c9db61d 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1954,19 +1953,15 @@ static int migration_cpu_stop(void *data)
> if (pending) {
> p->migration_pending = NULL;
> complete = true;
>
> if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
> goto out;
> }
>
> if (task_on_rq_queued(p))
> + rq = __migrate_task(rq, &rf, p, arg->dest_cpu);
> @@ -2249,7 +2244,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
> init_completion(&my_pending.done);
> my_pending.arg = (struct migration_arg) {
> .task = p,
> + .dest_cpu = dest_cpu,
> .pending = &my_pending,
> };
>
> @@ -2257,6 +2252,7 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
> } else {
> pending = p->migration_pending;
> refcount_inc(&pending->refs);
> + pending->arg.dest_cpu = dest_cpu;
> }
> }

Argh.. that might just work. But I'm thinking we want comments this
time around :-) This is even more subtle.
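
If this does go in, the comment probably wants to spell out the invariant
the diff relies on: whoever last wrote p->cpus_mask is also the last writer
of pending->arg.dest_cpu, so the stopper can trust it. Same toy as above
with that refresh applied -- again only a sketch, not the actual
affine_move_task()/migration_cpu_stop():

#include <stdio.h>

struct migration_arg {
	int dest_cpu;			/* rewritten by every SCA that reuses the pending */
};

struct task {
	unsigned int cpus_mask;
	int cpu;
	struct migration_arg *pending;	/* models p->migration_pending->arg */
};

static void sca(struct task *p, unsigned int new_mask, struct migration_arg *arg)
{
	p->cpus_mask = new_mask;
	if (!p->pending)
		p->pending = arg;	/* first SCA installs my_pending */
	/* every SCA that touches cpus_mask also writes dest_cpu */
	p->pending->dest_cpu = __builtin_ctz(new_mask);
}

static void stopper(struct task *p)
{
	struct migration_arg *arg = p->pending;

	p->pending = NULL;
	if (p->cpus_mask & (1u << p->cpu))
		return;			/* current CPU is still allowed, done */
	p->cpu = arg->dest_cpu;		/* otherwise use the latest destination */
}

int main(void)
{
	struct migration_arg my_pending;
	struct task p = { .cpus_mask = 0x3, .cpu = 0, .pending = NULL };

	sca(&p, 1u << 2, &my_pending);	/* SCA(p, [2]) */
	sca(&p, 1u << 3, NULL);		/* SCA(p, [3]) reuses and refreshes the pending */
	stopper(&p);
	printf("task now on CPU%d, mask 0x%x\n", p.cpu, p.cpus_mask);	/* CPU3 */
	return 0;
}

Built with plain gcc this ends up on CPU3, which is what the second SCA
asked for.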