Date: Tue, 7 May 2024 11:42:04 +0100
From: Qais Yousef <qyousef@...alina.io>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Ingo Molnar <mingo@...nel.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Daniel Bristot de Oliveira <bristot@...hat.com>,
	Valentin Schneider <vschneid@...hat.com>,
	Christian Loehle <christian.loehle@....com>,
	linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: Consolidate cpufreq updates

On 05/07/24 10:02, Peter Zijlstra wrote:
> On Tue, May 07, 2024 at 01:56:59AM +0100, Qais Yousef wrote:
> 
> > Yes. How about this? Since stopper class appears as RT, we should still check
> > for this class specifically.
> 
> Much nicer!
> 
> > static inline void update_cpufreq_ctx_switch(struct rq *rq, struct task_struct *prev)
> > {
> > #ifdef CONFIG_CPU_FREQ
> > 	if (likely(fair_policy(current->policy))) {
> > 
> > 		if (unlikely(current->in_iowait)) {
> > 			cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT | SCHED_CPUFREQ_FORCE_UPDATE);
> > 			return;
> > 		}
> > 
> > #ifdef CONFIG_SMP
> > 		/*
> > 		 * Allow cpufreq updates once for every update_load_avg() decay.
> > 		 */
> > 		if (unlikely(rq->cfs.decayed)) {
> > 			rq->cfs.decayed = false;
> > 			cpufreq_update_util(rq, 0);
> > 			return;
> > 		}
> > #endif
> > 		return;
> > 	}
> > 
> > 	/*
> > 	 * RT and DL should always send a freq update. But we can do some
> > 	 * simple checks to avoid it when we know it's not necessary.
> > 	 */
> > 	if (task_is_realtime(current)) {
> > 		if (dl_task(current) && current->dl.flags & SCHED_FLAG_SUGOV) {
> > 			/* Ignore sugov kthreads, they're responding to our requests */
> > 			return;
> > 		}
> > 
> > 		if (rt_task(current) && rt_task(prev)) {
> 
> doesn't task_is_realtime() imply rt_task()?
> 
> Also, this clause still includes DL tasks, is that okay?

Ugh, yes. The earlier dl_task() check is not good enough, since rt_task() also
returns true for DL tasks. I should send a patch to fix the definition of
rt_task()!
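
Roughly what I have in mind (a sketch only, not the actual patch, and the
helper name below is just a placeholder) is to make the "RT but not DL" test
explicit, since DL tasks run at prio < MAX_DL_PRIO (i.e. -1) and so also pass
the plain prio < MAX_RT_PRIO test that rt_task() does today:

	/* Sketch: true only for the SCHED_FIFO/SCHED_RR prio range, excludes DL */
	static inline bool task_has_rt_prio_only(struct task_struct *p)
	{
		return p->prio < MAX_RT_PRIO && p->prio >= MAX_DL_PRIO;
	}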

I think at this stage open-coding the policy check with a switch statement
is the best thing to do:

static inline void update_cpufreq_ctx_switch(struct rq *rq, struct task_struct *prev)
{
#ifdef CONFIG_CPU_FREQ
	/*
	 * RT and DL should always send a freq update. But we can do some
	 * simple checks to avoid it when we know it's not necessary.
	 *
	 * iowait_boost will always trigger a freq update too.
	 *
	 * Fair tasks will only trigger an update if the root cfs_rq has
	 * decayed.
	 *
	 * Everything else should do nothing.
	 */
	switch (current->policy) {
	case SCHED_NORMAL:
	case SCHED_BATCH:
		if (unlikely(current->in_iowait)) {
			cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT | SCHED_CPUFREQ_FORCE_UPDATE);
			return;
		}

#ifdef CONFIG_SMP
		if (unlikely(rq->cfs.decayed)) {
			rq->cfs.decayed = false;
			cpufreq_update_util(rq, 0);
			return;
		}
#endif
		return;
	case SCHED_FIFO:
	case SCHED_RR:
		if (rt_policy(prev->policy)) {
#ifdef CONFIG_UCLAMP_TASK
			unsigned long curr_uclamp_min = uclamp_eff_value(current, UCLAMP_MIN);
			unsigned long prev_uclamp_min = uclamp_eff_value(prev, UCLAMP_MIN);

			if (curr_uclamp_min == prev_uclamp_min)
#endif
				return;
		}
#ifdef CONFIG_SMP
		/* Stopper task masquerades as RT */
		if (unlikely(current->sched_class == &stop_sched_class))
			return;
#endif
		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
		return;
	case SCHED_DEADLINE:
		if (current->dl.flags & SCHED_FLAG_SUGOV) {
			/* Ignore sugov kthreads, they're responding to our requests */
			return;
		}
		cpufreq_update_util(rq, SCHED_CPUFREQ_FORCE_UPDATE);
		return;
	default:
		return;
	}
#endif
}
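
FWIW, the shape of it is that this becomes the one place where we decide
whether to poke cpufreq when switching tasks (that's the consolidation in
$SUBJECT). Illustrative only - the exact call site is whatever the patch ends
up with:

	/*
	 * Illustrative call (not necessarily where the patch places it):
	 * run once per context switch, after current has become the new
	 * task, with prev being the task we switched away from.
	 */
	update_cpufreq_ctx_switch(rq, prev);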
