Message-ID: <20201119110323.GA2432333@google.com>
Date: Thu, 19 Nov 2020 11:03:23 +0000
From: Quentin Perret <qperret@...gle.com>
To: Will Deacon <will@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
kernel-team@...roid.com
Subject: Re: [PATCH v3 07/14] sched: Introduce restrict_cpus_allowed_ptr() to
limit task CPU affinity

On Thursday 19 Nov 2020 at 09:18:20 (+0000), Quentin Perret wrote:
> Hey Will,
>
> On Friday 13 Nov 2020 at 09:37:12 (+0000), Will Deacon wrote:
> > -static int __set_cpus_allowed_ptr(struct task_struct *p,
> > - const struct cpumask *new_mask, bool check)
> > +static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
> > + const struct cpumask *new_mask,
> > + bool check,
> > + struct rq *rq,
> > + struct rq_flags *rf)
> > {
> > const struct cpumask *cpu_valid_mask = cpu_active_mask;
> > unsigned int dest_cpu;
> > - struct rq_flags rf;
> > - struct rq *rq;
> > int ret = 0;
>
> Should we have a lockdep assertion here?
>
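To be clear, I was thinking of something along these lines at the top of
the _locked variant, assuming the caller really does take both locks via
task_rq_lock() (sketch only, not tested):

	/*
	 * Sketch: document the new locking contract. The caller is
	 * expected to hold both locks taken by task_rq_lock().
	 */
	lockdep_assert_held(&p->pi_lock);
	lockdep_assert_held(&rq->lock);
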
> > - rq = task_rq_lock(p, &rf);
> > update_rq_clock(rq);
> >
> > if (p->flags & PF_KTHREAD) {
> > @@ -1929,7 +1923,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
> > if (task_running(rq, p) || p->state == TASK_WAKING) {
> > struct migration_arg arg = { p, dest_cpu };
> > /* Need help from migration thread: drop lock and wait. */
> > - task_rq_unlock(rq, p, &rf);
> > + task_rq_unlock(rq, p, rf);
> > stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
> > return 0;
> > } else if (task_on_rq_queued(p)) {
> > @@ -1937,20 +1931,69 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
> > * OK, since we're going to drop the lock immediately
> > * afterwards anyway.
> > */
> > - rq = move_queued_task(rq, &rf, p, dest_cpu);
> > + rq = move_queued_task(rq, rf, p, dest_cpu);
> > }
> > out:
> > - task_rq_unlock(rq, p, &rf);
> > + task_rq_unlock(rq, p, rf);
>
> And the unlock is a little odd to have here, no? Can we move it back
> to the caller's side?

Yeah, no, that obviously doesn't work for the stop_one_cpu() call above,
so feel free to ignore ...
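For the record, the reason it can't move: migration_cpu_stop() takes the
rq lock itself, so the helper has to drop the lock before calling
stop_one_cpu() and waiting for the stopper. Given that, I'd expect the
unlocked wrapper to end up looking something like this (my sketch, not
necessarily what the patch does elsewhere):

static int __set_cpus_allowed_ptr(struct task_struct *p,
				  const struct cpumask *new_mask, bool check)
{
	struct rq_flags rf;
	struct rq *rq;

	/* Take the locks here; the _locked helper releases them. */
	rq = task_rq_lock(p, &rf);
	return __set_cpus_allowed_ptr_locked(p, new_mask, check, rq, &rf);
}
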
Thanks,
Quentin