Message-ID: <315f8c55-9368-4f2a-81ee-2d7dcb05bc14@arm.com>
Date: Mon, 29 Jul 2024 17:01:53 +0100
From: Metin Kaya <metin.kaya@....com>
To: Qais Yousef <qyousef@...alina.io>, "Rafael J. Wysocki"
<rafael@...nel.org>, Viresh Kumar <viresh.kumar@...aro.org>,
Ingo Molnar <mingo@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Juri Lelli <juri.lelli@...hat.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Dietmar Eggemann <dietmar.eggemann@....com>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>,
Christian Loehle <christian.loehle@....com>,
Hongyan Xia <hongyan.xia2@....com>, John Stultz <jstultz@...gle.com>,
linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7] sched: Consolidate cpufreq updates
On 28/07/2024 7:45 pm, Qais Yousef wrote:
> Improve the interaction with cpufreq governors by making the
> cpufreq_update_util() calls more intentional.
[snip]
> We also ensure to ignore cpufreq udpates for sugov workers at context
Nit: s/udpates/updates/
> switch if it was prev task.
[snip]
>
> +static __always_inline void
> +__update_cpufreq_ctx_switch(struct rq *rq, struct task_struct *prev)
> +{
> +#ifdef CONFIG_CPU_FREQ
> + if (prev && prev->dl.flags & SCHED_FLAG_SUGOV) {
> + /* Sugov just did an update, don't be too aggressive */
> + return;
> + }
> +
> + /*
> + * RT and DL should always send a freq update. But we can do some
> + * simple checks to avoid it when we know it's not necessary.
> + *
> + * iowait_boost will always trigger a freq update too.
> + *
> + * Fair tasks will only trigger an update if the root cfs_rq has
> + * decayed.
> + *
> + * Everything else should do nothing.
> + */
> + switch (current->policy) {
> + case SCHED_NORMAL:
> + case SCHED_BATCH:
> + case SCHED_IDLE:
> + if (unlikely(current->in_iowait)) {
> + cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT | SCHED_CPUFREQ_FORCE_UPDATE);
> + return;
> + }
> +
> +#ifdef CONFIG_SMP
> + /*
> + * Send an update if we switched from RT or DL as they tend to
> + * boost the CPU and we are likely able to reduce the freq now.
> + */
> + rq->cfs.decayed |= prev && (rt_policy(prev->policy) || dl_policy(prev->policy));
> +
> + if (unlikely(rq->cfs.decayed)) {
> + rq->cfs.decayed = false;
> + cpufreq_update_util(rq, 0);
> + return;
> + }
> +#else
> + cpufreq_update_util(rq, 0);
> +#endif
> + return; /* ! */
> + case SCHED_FIFO:
> + case SCHED_RR:
> + if (prev && rt_policy(prev->policy)) {
> +#ifdef CONFIG_UCLAMP_TASK
> + unsigned long curr_uclamp_min = uclamp_eff_value(current, UCLAMP_MIN);
> + unsigned long prev_uclamp_min = uclamp_eff_value(prev, UCLAMP_MIN);
> +
> + if (curr_uclamp_min == prev_uclamp_min)
> +#endif
> + return;
> + }
> +#ifdef CONFIG_SMP
> + /* Stopper task masquerades as RT */
> + if (unlikely(current->sched_class == &stop_sched_class))
> + return;
> +#endif
> + cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
> + return; /* ! */
> + case SCHED_DEADLINE:
> + /*
> + * This is handled at enqueue to avoid breaking DL bandwidth
> + * rules when multiple DL tasks are running on the same CPU.
> + * Deferring till context switch here could mean the bandwidth
> + * calculations would be broken to ensure all the DL tasks meet
> + * their deadlines.
> + */
> + return; /* ! */
> + default:
> + return; /* ! */
> + }
Nit: would it be more conventional to replace the `return` statements
marked with `/* ! */` above with `break`s?
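
Something like the below, perhaps (untested sketch, unchanged parts of
the function elided), so each case simply falls through to the end of
the switch:

	switch (current->policy) {
	case SCHED_NORMAL:
	case SCHED_BATCH:
	case SCHED_IDLE:
		/* ... iowait/decay handling as above, inner early returns kept ... */
		break;
	case SCHED_FIFO:
	case SCHED_RR:
		/* ... uclamp/stopper-task checks as above, inner early returns kept ... */
		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
		break;
	case SCHED_DEADLINE:
		/* Handled at enqueue, see the comment above. */
		break;
	default:
		break;
	}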
> +#endif
> +}
> +
> +/*
> + * Call when currently running task had an attribute change that requires
> + * an immediate cpufreq update.
> + */
> +void update_cpufreq_current(struct rq *rq)
> +{
> + __update_cpufreq_ctx_switch(rq, NULL);
> +}
> +
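Also, just to double-check my understanding of update_cpufreq_current():
a caller that changes an attribute of the running task (e.g., its uclamp
values) would use it roughly like this hypothetical call site (not from
this patch):

	/* Hypothetical call site: after updating e.g. uclamp values of @p. */
	if (task_current(rq, p))
		update_cpufreq_current(rq);

i.e., it is only meaningful when @p is what is currently running on that
rq, correct?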
[snip]