Message-ID: <20190121153308.GL27931@hirez.programming.kicks-ass.net>
Date: Mon, 21 Jan 2019 16:33:08 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Patrick Bellasi <patrick.bellasi@....com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
linux-api@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Tejun Heo <tj@...nel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Paul Turner <pjt@...gle.com>,
Quentin Perret <quentin.perret@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [PATCH v6 05/16] sched/core: uclamp: Update CPU's refcount on clamp changes
On Tue, Jan 15, 2019 at 10:15:02AM +0000, Patrick Bellasi wrote:
> +static inline void
> +uclamp_task_update_active(struct task_struct *p, unsigned int clamp_id)
> +{
> + struct rq_flags rf;
> + struct rq *rq;
> +
> + /*
> + * Lock the task and the CPU where the task is (or was) queued.
> + *
> + * We might lock the (previous) rq of a !RUNNABLE task, but that's the
> + * price to pay to safely serialize util_{min,max} updates with
> + * enqueues, dequeues and migration operations.
> + * This is the same locking schema used by __set_cpus_allowed_ptr().
> + */
> + rq = task_rq_lock(p, &rf);
> +
> + /*
> + * Setting the clamp bucket is serialized by task_rq_lock().
> + * If the task is not yet RUNNABLE and its task_struct is not
> + * affecting a valid clamp bucket, the next time it's enqueued,
> + * it will already see the updated clamp bucket value.
> + */
> + if (!p->uclamp[clamp_id].active)
> + goto done;
> +
> + uclamp_cpu_dec_id(p, rq, clamp_id);
> + uclamp_cpu_inc_id(p, rq, clamp_id);
> +
> +done:
> + task_rq_unlock(rq, p, &rf);
> +}
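
[ For illustration only, not part of the patch: the natural consumer of
  this helper is the sched_setattr() path. A hypothetical wrapper that
  refreshes both clamp indexes in one go (assumes the series' UCLAMP_CNT
  enumerator) might look like:

	/* Hypothetical sketch; uclamp_task_refresh() is not in the patch. */
	static void uclamp_task_refresh(struct task_struct *p)
	{
		unsigned int clamp_id;

		/* Re-account the task in each clamp bucket it affects. */
		for (clamp_id = 0; clamp_id < UCLAMP_CNT; clamp_id++)
			uclamp_task_update_active(p, clamp_id);
	}
]
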
> @@ -1008,11 +1043,11 @@ static int __setscheduler_uclamp(struct task_struct *p,
>
> mutex_lock(&uclamp_mutex);
> if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MIN) {
> - uclamp_bucket_inc(&p->uclamp[UCLAMP_MIN],
> + uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MIN],
> UCLAMP_MIN, lower_bound);
> }
> if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP_MAX) {
> - uclamp_bucket_inc(&p->uclamp[UCLAMP_MAX],
> + uclamp_bucket_inc(p, &p->uclamp[UCLAMP_MAX],
> UCLAMP_MAX, upper_bound);
> }
> mutex_unlock(&uclamp_mutex);
But... __sched_setscheduler() already does the whole dequeue + enqueue
dance ?!? See where it calls __setscheduler().
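
For reference, the relevant shape of __sched_setscheduler() (paraphrased
and trimmed from kernel/sched/core.c of that era, not verbatim):

	queued = task_on_rq_queued(p);
	running = task_current(rq, p);
	if (queued)
		dequeue_task(rq, p, queue_flags);
	if (running)
		put_prev_task(rq, p);

	__setscheduler(rq, p, attr, pi);	/* new attributes applied here */

	if (queued)
		enqueue_task(rq, p, queue_flags);
	if (running)
		set_curr_task(rq, p);

Since the series hooks the uclamp accounting into enqueue_task() and
dequeue_task(), a runnable task already gets its clamp buckets released
and re-acquired around the attribute change; the explicit dec/inc pair
in uclamp_task_update_active() repeats that by hand.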