Message-ID: <20210524214348.GH15545@willie-the-truck>
Date: Mon, 24 May 2021 22:43:49 +0100
From: Will Deacon <will@...nel.org>
To: Qais Yousef <qais.yousef@....com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
Marc Zyngier <maz@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Suren Baghdasaryan <surenb@...gle.com>,
Quentin Perret <qperret@...gle.com>, Tejun Heo <tj@...nel.org>,
Li Zefan <lizefan@...wei.com>,
Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
"Rafael J. Wysocki" <rjw@...ysocki.net>, kernel-team@...roid.com
Subject: Re: [PATCH v6 12/21] sched: Allow task CPU affinity to be restricted
on asymmetric systems
On Fri, May 21, 2021 at 06:11:32PM +0100, Qais Yousef wrote:
> On 05/18/21 10:47, Will Deacon wrote:
> > +static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
> > + const struct cpumask *new_mask,
> > + u32 flags,
> > + struct rq *rq,
> > + struct rq_flags *rf)
> > + __releases(rq->lock)
> > + __releases(p->pi_lock)
> > {
> > const struct cpumask *cpu_valid_mask = cpu_active_mask;
> > const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
> > unsigned int dest_cpu;
> > - struct rq_flags rf;
> > - struct rq *rq;
> > int ret = 0;
> >
> > - rq = task_rq_lock(p, &rf);
> > update_rq_clock(rq);
> >
> > if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
> > @@ -2430,20 +2425,158 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
> >
> > __do_set_cpus_allowed(p, new_mask, flags);
> >
> > - return affine_move_task(rq, p, &rf, dest_cpu, flags);
> > + if (flags & SCA_USER)
> > + release_user_cpus_ptr(p);
>
> Why do we need to release the pointer here?
>
> Doesn't this mean if a 32bit task requests to change its affinity, then we'll
> lose this info and a subsequent execve() to a 64bit application means we won't
> be able to restore the original mask?
>
> ie:
>
> p0-64bit
> execve(32bit_app)
> // p1-32bit created
> p1-32bit.change_affinity()
> release_user_cpus_ptr()
> execve(64bit_app) // lost info about p0 affinity?
>
> Hmm, I think writing this out helped me get to the answer: p1 changed its
> affinity, so there's nothing to be inherited by a new execve(), and we no
> longer need this info.
Yup, you got it.
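The lifecycle worked through above can be modelled in plain userspace C. This is only a sketch: the struct, helper names and uint32_t "cpumask" are stand-ins invented for illustration, and the real kernel code manipulates struct cpumask under the rq and pi locks:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy model: a cpumask is just a bitmask of CPU numbers. */
typedef uint32_t cpumask_t;

struct task {
	cpumask_t cpus_mask;      /* effective affinity */
	cpumask_t *user_cpus_ptr; /* saved user-requested affinity, or NULL */
};

/*
 * Forcefully restrict affinity (e.g. a 32-bit task on an asymmetric
 * system), remembering what the user originally asked for.
 */
static void restrict_affinity(struct task *p, cpumask_t subset)
{
	if (!p->user_cpus_ptr) {
		p->user_cpus_ptr = malloc(sizeof(*p->user_cpus_ptr));
		*p->user_cpus_ptr = p->cpus_mask;
	}
	p->cpus_mask &= subset;
}

/*
 * A task explicitly changing its own affinity invalidates the saved
 * mask: nothing is left to restore at the next execve(). This models
 * the release_user_cpus_ptr() call under discussion.
 */
static void set_affinity_user(struct task *p, cpumask_t new_mask)
{
	p->cpus_mask = new_mask;
	free(p->user_cpus_ptr);
	p->user_cpus_ptr = NULL;
}

/* On execve() into a 64-bit image, restore the saved mask if any. */
static void exec_restore(struct task *p)
{
	if (p->user_cpus_ptr) {
		p->cpus_mask = *p->user_cpus_ptr;
		free(p->user_cpus_ptr);
		p->user_cpus_ptr = NULL;
	}
}
```

With this model, restrict-then-execve restores the original mask, but restrict-then-setaffinity-then-execve keeps the task's own choice, matching the scenario in the quoted text.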
> > +static int restrict_cpus_allowed_ptr(struct task_struct *p,
> > + struct cpumask *new_mask,
> > + const struct cpumask *subset_mask)
> > +{
> > + struct rq_flags rf;
> > + struct rq *rq;
> > + int err;
> > + struct cpumask *user_mask = NULL;
> > +
> > + if (!p->user_cpus_ptr)
> > + user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
> > +
> > + rq = task_rq_lock(p, &rf);
> > +
> > + /*
> > + * We're about to butcher the task affinity, so keep track of what
> > + * the user asked for in case we're able to restore it later on.
> > + */
> > + if (user_mask) {
> > + cpumask_copy(user_mask, p->cpus_ptr);
> > + p->user_cpus_ptr = user_mask;
> > + }
> > +
> > + /*
> > + * Forcefully restricting the affinity of a deadline task is
> > + * likely to cause problems, so fail and noisily override the
> > + * mask entirely.
> > + */
> > + if (task_has_dl_policy(p) && dl_bandwidth_enabled()) {
> > + err = -EPERM;
> > + goto err_unlock;
>
> free(user_mask) first?
>
> > + }
> > +
> > + if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
> > + err = -EINVAL;
> > + goto err_unlock;
>
> ditto
We free the mask when the task exits, so we don't actually need to clean up
here. I left it like this on the assumption that a failure here means the
task is very likely to either exit or retry very soon.
However I agree that it would be clearer to free the thing anyway, so I'll
rejig the code to do that.
Will