Message-ID: <20240515104758.mogzntczla6xar6o@airbuntu>
Date: Wed, 15 May 2024 11:47:58 +0100
From: Qais Yousef <qyousef@...alina.io>
To: "Rafael J. Wysocki" <rafael@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Christian Loehle <christian.loehle@....com>,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
John Stultz <jstultz@...gle.com>
Subject: Re: [PATCH v3] sched: Consolidate cpufreq updates

On 05/12/24 20:00, Qais Yousef wrote:
> +static __always_inline void
> +update_cpufreq_ctx_switch(struct rq *rq, struct task_struct *prev)
> +{
> +#ifdef CONFIG_CPU_FREQ
> +	/*
> +	 * RT and DL should always send a freq update. But we can do some
> +	 * simple checks to avoid it when we know it's not necessary.
> +	 *
> +	 * iowait_boost will always trigger a freq update too.
> +	 *
> +	 * Fair tasks will only trigger an update if the root cfs_rq has
> +	 * decayed.
> +	 *
> +	 * Everything else should do nothing.
> +	 */
> +	switch (current->policy) {
I just realized the policy check will ignore PI-boosted tasks. But since we
don't have performance inheritance in rt_mutex yet (I have out-of-tree patches
if there's appetite for this), I don't think it will matter here as the
decision wouldn't change.
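
For illustration, a rough sketch of what a PI-aware check could look like (the
helper name is made up for this email, not part of the patch): rt_mutex_setprio()
moves p->prio into the RT range but leaves p->policy untouched, so the effective
priority and the policy can disagree for a boosted fair task:

	/*
	 * Hypothetical helper: true for a SCHED_NORMAL/SCHED_BATCH task
	 * that is currently running with a boosted (RT-range) effective
	 * priority because of an rt_mutex PI boost.
	 */
	static inline bool task_is_pi_boosted_to_rt(struct task_struct *p)
	{
		return fair_policy(p->policy) && rt_prio(p->prio);
	}

Such a task would still take the fair path in the switch above, which is fine
for now since without performance inheritance its frequency request is the
same either way.
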
Once Proxy Execution lands, I think this should work as intended, provided we
use the correct wrapper to check the current scheduling context.
> +	case SCHED_NORMAL:
> +	case SCHED_BATCH:
> +		if (unlikely(current->in_iowait)) {
> +			cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT | SCHED_CPUFREQ_FORCE_UPDATE);
> +			return;
> +		}
> +
> +#ifdef CONFIG_SMP
> +		if (unlikely(rq->cfs.decayed)) {
> +			rq->cfs.decayed = false;
> +			cpufreq_update_util(rq, 0);
> +			return;
> +		}
> +#endif
> +		return;
> +	case SCHED_FIFO:
> +	case SCHED_RR:
> +		if (prev && rt_policy(prev->policy)) {
> +#ifdef CONFIG_UCLAMP_TASK
> +			unsigned long curr_uclamp_min = uclamp_eff_value(current, UCLAMP_MIN);
> +			unsigned long prev_uclamp_min = uclamp_eff_value(prev, UCLAMP_MIN);
> +
> +			if (curr_uclamp_min == prev_uclamp_min)
> +#endif
> +				return;
> +		}
> +#ifdef CONFIG_SMP
> +		/* Stopper task masquerades as RT */
> +		if (unlikely(current->sched_class == &stop_sched_class))
> +			return;
> +#endif
> +		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
> +		return;
> +	case SCHED_DEADLINE:
> +		if (current->dl.flags & SCHED_FLAG_SUGOV) {
> +			/* Ignore sugov kthreads, they're responding to our requests */
> +			return;
> +		}
> +		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
> +		return;
> +	default:
> +		return;
> +	}
> +#endif
> +}
> +