Message-ID: <20240513220903.no2j6zl4tk7lr6um@airbuntu>
Date: Mon, 13 May 2024 23:09:03 +0100
From: Qais Yousef <qyousef@...alina.io>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Daniel Bristot de Oliveira <bristot@...hat.com>,
	Valentin Schneider <vschneid@...hat.com>,
	Christian Loehle <christian.loehle@....com>,
	linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched: Consolidate cpufreq updates

On 05/13/24 14:43, Dietmar Eggemann wrote:
> On 12/05/2024 21:00, Qais Yousef wrote:
> 
> [...]
> 
> > @@ -4682,7 +4659,7 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
> >  
> >  	add_tg_cfs_propagate(cfs_rq, se->avg.load_sum);
> >  
> > -	cfs_rq_util_change(cfs_rq, 0);
> > +	cpufreq_update_util(rq_of(cfs_rq), 0);
> 
> Isn't this slightly different now?
> 
> before:
> 
>    if (&rq->cfs == cfs_rq) {
>        cpufreq_update_util(rq, ....)
>    }
> 
> now:
> 
>    cpufreq_update_util(rq_of(cfs_rq), ...)
> 
> You should get way more updates from attach/detach now.

Yes, well spotted!
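
For reference, cfs_rq_util_change() forwarded the request only for the root
cfs_rq, roughly like this (simplified from memory, so treat as a sketch):

	static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq, int flags)
	{
		struct rq *rq = rq_of(cfs_rq);

		/* Only the root cfs_rq drives the frequency update. */
		if (&rq->cfs == cfs_rq)
			cpufreq_update_util(rq, flags);
	}

So calling cpufreq_update_util(rq_of(cfs_rq), 0) unconditionally does indeed
fire for every group cfs_rq attach/detach as well.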

Looking at the path more closely, I can see this is called from the
enqueue_task_fair() path when a task migrates to a new CPU, and from
attach_task_cfs_rq(), which is called either when we switched_to_fair() (a
case I already cover via the policy change update for the RUNNING task) or
from task_change_group_fair(), which is what I originally understood Vincent
was referring to. I moved the update to the latter function, after the
detach/attach operations, with better guards to avoid unnecessary updates.
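
Roughly, the call chains I'm seeing (paraphrased, not exhaustive):

	/* migration: PELT was detached, se->avg.last_update_time == 0 */
	enqueue_task_fair()
	  enqueue_entity()
	    update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH)
	      attach_entity_load_avg()

	/* switched_to_fair() or task_change_group_fair() */
	attach_task_cfs_rq()
	  attach_entity_cfs_rq()
	    attach_entity_load_avg()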

I understood this can lead to a big change in the signals that is better
applied immediately rather than deferred to the next context switch. But I'll
ask the question again: can we drop this and defer to the context switch?
I.e. something like the below:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 43f6244ab0f9..e791969360d1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4659,8 +4659,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s

        add_tg_cfs_propagate(cfs_rq, se->avg.load_sum);

-       cpufreq_update_util(rq_of(cfs_rq), 0);
-
        trace_pelt_cfs_tp(cfs_rq);
 }

@@ -4689,8 +4687,6 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s

        add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);

-       cpufreq_update_util(rq_of(cfs_rq), 0);
-
        trace_pelt_cfs_tp(cfs_rq);
 }

@@ -12856,6 +12852,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void task_change_group_fair(struct task_struct *p)
 {
+       struct rq *rq = task_rq(p);
        /*
         * We couldn't detach or attach a forked task which
         * hasn't been woken up by wake_up_new_task().
@@ -12871,6 +12868,10 @@ static void task_change_group_fair(struct task_struct *p)
 #endif
        set_task_rq(p, task_cpu(p));
        attach_task_cfs_rq(p);
+       if (task_on_rq_queued(p) && rq->cfs.decayed) {
+               rq->cfs.decayed = false;
+               cpufreq_update_util(rq, 0);
+       }
 }

 void free_fair_sched_group(struct task_group *tg)
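To illustrate the deferral concretely, the same decayed-based guard would then
live in a single place at context switch; a rough, hypothetical sketch (helper
name made up, not part of this diff or the posted patch):

	/*
	 * Hypothetical: consume the decayed flag once per context switch and
	 * issue a single cpufreq update for the CPU, instead of one update
	 * per attach/detach site.
	 */
	static inline void cpufreq_update_at_ctx_switch(struct rq *rq)
	{
		if (rq->cfs.decayed) {
			rq->cfs.decayed = false;
			cpufreq_update_util(rq, 0);
		}
	}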
