Message-ID: <jhja6vdwpqc.mognet@arm.com>
Date: Thu, 19 Nov 2020 11:27:55 +0000
From: Valentin Schneider <valentin.schneider@....com>
To: Will Deacon <will@...nel.org>
Cc: Quentin Perret <qperret@...gle.com>,
linux-arm-kernel@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Qais Yousef <qais.yousef@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
kernel-team@...roid.com
Subject: Re: [PATCH v3 07/14] sched: Introduce restrict_cpus_allowed_ptr() to limit task CPU affinity
On 19/11/20 11:05, Will Deacon wrote:
> On Thu, Nov 19, 2020 at 09:18:20AM +0000, Quentin Perret wrote:
>> > @@ -1937,20 +1931,69 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
>> > * OK, since we're going to drop the lock immediately
>> > * afterwards anyway.
>> > */
>> > - rq = move_queued_task(rq, &rf, p, dest_cpu);
>> > + rq = move_queued_task(rq, rf, p, dest_cpu);
>> > }
>> > out:
>> > - task_rq_unlock(rq, p, &rf);
>> > + task_rq_unlock(rq, p, rf);
>>
>> And that's a little odd to have here no? Can we move it back on the
>> caller's side?
>
> I don't think so, unfortunately. __set_cpus_allowed_ptr_locked() can trigger
> migration, so it can drop the rq lock as part of that and end up relocking a
> new rq, which it also unlocks before returning. Doing the unlock in the
> caller is therefore even weirder, because you'd have to return the lock
> pointer or something horrible like that.
>
> I did add a comment about this right before the function and it's an
> internal function to the scheduler so I think it's ok.
>
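For reference, the shape being discussed is roughly the below -- only a
sketch, with signatures, flags plumbing and the cpumask handling
approximated from the quoted hunk rather than taken from the actual
patch. The caller takes p's rq lock and hands the rq_flags down; the
locked helper may end up holding a *different* rq after migrating p, so
only the helper knows which rq to unlock:

static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
					 const struct cpumask *new_mask,
					 u32 flags, struct rq *rq,
					 struct rq_flags *rf)
{
	unsigned int dest_cpu;
	int ret = 0;

	/* ... honour 'flags', validate new_mask, update p->cpus_ptr ... */
	dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);

	if (task_on_rq_queued(p)) {
		/* May release rq's lock and return a different, locked rq. */
		rq = move_queued_task(rq, rf, p, dest_cpu);
	}

	/* Unlock whichever rq we ended up holding. */
	task_rq_unlock(rq, p, rf);
	return ret;
}

static int __set_cpus_allowed_ptr(struct task_struct *p,
				  const struct cpumask *new_mask,
				  u32 flags)
{
	struct rq_flags rf;
	struct rq *rq = task_rq_lock(p, &rf);

	/* The helper drops the lock on our behalf. */
	return __set_cpus_allowed_ptr_locked(p, new_mask, flags, rq, &rf);
}
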
An alternative here would be to add a new SCA_RESTRICT flag for
__set_cpus_allowed_ptr() (see migrate_disable() faff in
tip/sched/core). Not fond of either approach, but the flag thing would
avoid this "quirk".
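
Something along these lines, i.e. keep a single entry point and let the
flag select the "restrict" behaviour. The flag value, the wrapper
signature and the flags plumbing below are purely illustrative, modelled
on the SCA_* flags already in tip/sched/core:

/* Hypothetical flag, alongside SCA_CHECK & friends in tip/sched/core. */
#define SCA_RESTRICT	0x08

static int restrict_cpus_allowed_ptr(struct task_struct *p,
				     const struct cpumask *subset_mask)
{
	/*
	 * __set_cpus_allowed_ptr() keeps doing its own rq (un)locking; the
	 * flag tells it to intersect with the current affinity rather than
	 * replace it, so no _locked split is required.
	 */
	return __set_cpus_allowed_ptr(p, subset_mask, SCA_RESTRICT);
}
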
> Will