Message-ID: <20160222153132.GI6357@twins.programming.kicks-ass.net>
Date:	Mon, 22 Feb 2016 16:31:32 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vincent Guittot <vincent.guittot@...aro.org>
Cc:	Steve Muckle <steve.muckle@...aro.org>,
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
	Juri Lelli <juri.lelli@....com>,
	"Rafael J. Wysocki" <rafael@...nel.org>,
	Linux PM list <linux-pm@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	"Rafael J. Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH v10 1/3] cpufreq: Add mechanism for registering
 utilization update callbacks

On Mon, Feb 22, 2016 at 03:33:02PM +0100, Vincent Guittot wrote:

> > Right, so I had a wee look at that again, and had a quick chat with Juri
> > on IRC. So the main difference seems to be that you guys want to know
> > why the utilization changed, as opposed to purely _that_ it changed.
> 
> Yes, the main goal was to be able to filter useful from useless
> updates of the rq's utilization, in order to minimize/optimize how
> often a frequency update is triggered. These patches were made for a
> cpufreq driver that reacts far more slowly than the scheduler. It
> might be worth starting with a simple solution and updating it later.

Right, always start simple :-)
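
For illustration only (not from the posted patches): the "filter useless
updates" idea above boils down to the governor side rate-limiting itself
when callbacks arrive faster than the hardware can react. A standalone
sketch, where all names and the 10ms period are made-up assumptions:

	/*
	 * Standalone sketch (not kernel code) of governor-side filtering:
	 * ignore utilization callbacks that arrive faster than the driver
	 * can act on them.  Names and the 10ms period are illustrative.
	 */
	#include <stdbool.h>
	#include <stdint.h>

	#define MIN_SAMPLE_PERIOD_NS	(10ULL * 1000 * 1000)	/* ~10ms, made up */

	struct gov_state {
		uint64_t	last_update;	/* ns timestamp of last accepted update */
		unsigned long	last_util;
	};

	/* Return true if this utilization update is worth acting on. */
	static bool gov_should_update(struct gov_state *g, uint64_t time,
				      unsigned long util, unsigned long max)
	{
		(void)max;

		if (time - g->last_update < MIN_SAMPLE_PERIOD_NS)
			return false;	/* driver still busy, drop the update */

		g->last_update = time;
		g->last_util = util;
		return true;
	}

With that kind of filtering on the consumer side, the scheduler-side
question in this thread reduces to *where* the callback fires, not how
often.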

> > And hence you have callbacks all over the place.
> >
> > I'm not too sure I really like that too much, it bloats the code and
> > somewhat obfuscates the point.
> >
> > So I would really like there to be just the one callback when we
> > actually compute a new number, and that is update_load_avg().
> >
> > Now I think we can 'easily' propagate the information you want into
> > update_load_avg() (see below), but I would like to see actual arguments
> > for why you would need this.
> 
> Your proposal is interesting, except that we are interested in the
> rq's utilization more than the se's, so we should rather use
> update_cfs_rq_load_avg, plus a few additional places like
> attach_entity_load_avg, which bypasses update_cfs_rq_load_avg when
> updating the rq's utilization and load.

Ah, so the intent was to use the rq->cfs util, but I might have gotten
a little lost in the load update code (I always get confused by that
code if I haven't looked at it for a while).
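
For illustration (a paraphrase, not a quote of fair.c): the attach path
folds an entity's contribution straight into the cfs_rq sums, which is
why it counts as an "additional place" that a hook placed only in
update_cfs_rq_load_avg() would miss:

	/*
	 * Paraphrased sketch: attach_entity_load_avg() adds the entity's
	 * averages directly to the cfs_rq, bypassing update_cfs_rq_load_avg(),
	 * so the rq's utilization can jump here without any hook firing.
	 */
	static void attach_entity_load_avg_sketch(struct cfs_rq *cfs_rq,
						  struct sched_entity *se)
	{
		cfs_rq->avg.load_avg += se->avg.load_avg;
		cfs_rq->avg.util_avg += se->avg.util_avg;
		/* rq->cfs utilization just changed; no cpufreq callback yet */
	}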

We can put the hook in update_cfs_rq_load_avg(), that shouldn't be a
problem.

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2859,29 +2859,6 @@ static inline int update_cfs_rq_load_avg
 	cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-	return decayed || removed;
-}
-
-/* Update task and its cfs_rq load average */
-static inline void update_load_avg(struct sched_entity *se, int update_tg,
-				   enum load_update_type type)
-{
-	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
-	struct rq *rq = rq_of(cfs_rq);
-	int cpu = cpu_of(rq);
-
-	/*
-	 * Track task load average for carrying it to new CPU after migrated, and
-	 * track group sched_entity load average for task_h_load calc in migration
-	 */
-	__update_load_avg(now, cpu, &se->avg,
-			  se->on_rq * scale_load_down(se->load.weight),
-			  cfs_rq->curr == se, NULL);
-
-	if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg)
-		update_tg_load_avg(cfs_rq, 0);
-
 	if (cpu == smp_processor_id() && &rq->cfs == cfs_rq) {
 		unsigned long max = rq->cpu_capacity_orig;
 
@@ -2904,6 +2881,29 @@ static inline void update_load_avg(struc
 		cpufreq_update_util(rq_clock(rq),
 				    min(cfs_rq->avg.util_avg, max), max);
 	}
+
+	return decayed || removed;
+}
+
+/* Update task and its cfs_rq load average */
+static inline void update_load_avg(struct sched_entity *se, int update_tg,
+				   enum load_update_type type)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	u64 now = cfs_rq_clock_task(cfs_rq);
+	struct rq *rq = rq_of(cfs_rq);
+	int cpu = cpu_of(rq);
+
+	/*
+	 * Track task load average for carrying it to new CPU after migrated, and
+	 * track group sched_entity load average for task_h_load calc in migration
+	 */
+	__update_load_avg(now, cpu, &se->avg,
+			  se->on_rq * scale_load_down(se->load.weight),
+			  cfs_rq->curr == se, NULL);
+
+	if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg)
+		update_tg_load_avg(cfs_rq, 0);
 }
 
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
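
For completeness, the consumer side of this hook, as introduced by patch
1/3 of this series as far as I recall it: a governor registers a per-CPU
callback that cpufreq_update_util() ends up invoking. Treat the names
below (struct update_util_data, cpufreq_set_update_util_data,
my_gov_*) as a sketch from memory rather than a verbatim quote of the
patch:

	#include <linux/cpufreq.h>

	/*
	 * Sketch of a governor consuming the utilization update hook;
	 * signatures recalled from patch 1/3, illustrative only.
	 */
	static void my_gov_update_util(struct update_util_data *data,
				       u64 time, unsigned long util,
				       unsigned long max)
	{
		/*
		 * util/max is the scheduler's view of CPU utilization; a slow
		 * driver can rate-limit here before touching the hardware.
		 */
	}

	static struct update_util_data my_gov_data = {
		.func = my_gov_update_util,
	};

	static void my_gov_start(int cpu)
	{
		cpufreq_set_update_util_data(cpu, &my_gov_data);	/* register */
	}

	static void my_gov_stop(int cpu)
	{
		cpufreq_set_update_util_data(cpu, NULL);		/* unregister */
	}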
