Message-ID: <20231217214455.5rf67ezdrwqdvwwh@airbuntu>
Date: Sun, 17 Dec 2023 21:44:55 +0000
From: Qais Yousef <qyousef@...alina.io>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: Ingo Molnar <mingo@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
	"Rafael J. Wysocki" <rafael@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
	Lukasz Luba <lukasz.luba@....com>, Wei Wang <wvw@...gle.com>,
	Rick Yiu <rickyiu@...gle.com>, Chung-Kai Mei <chungkai@...gle.com>,
	Hongyan Xia <hongyan.xia2@....com>
Subject: Re: [PATCH 1/4] sched/fair: Be less aggressive in calling
 cpufreq_update_util()

On 12/18/23 09:51, Dietmar Eggemann wrote:
> On 08/12/2023 02:52, Qais Yousef wrote:
> > Due to the way the code is structured, it makes a lot of sense to trigger
> > cpufreq_update_util() from update_load_avg(). But this is too aggressive,
> > as in most cases we are iterating through the entities in the hierarchy in
> > a loop calling update_load_avg(). So we end up sending too many requests
> > in a loop as we're updating the hierarchy.
> > 
> > Combined with the rate limit in schedutil, we could end up prematurely
> > sending a wrong frequency update before we have actually updated all
> > entities appropriately.
> > 
> > Be smarter about it by limiting the trigger to perform frequency updates
> > after all accounting logic has been done. This ended up being in the
> 
> What are the boundaries of the 'accounting logic' here? Is this related
> to the update of all sched_entities and cfs_rq's involved when a task is
> attached/detached (or enqueued/dequeued)?

Yes.
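
To make the intent concrete, roughly (a sketch of the idea only, not the
actual diff; the exact call sites are in the patch):

	for_each_sched_entity(se) {
		/* no cfs_rq_util_change() triggered from in here anymore */
		update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
		...
	}

	/* single request once the whole hierarchy is up to date */
	cpufreq_update_util(rq, 0);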

> 
> I can't see that there are any premature cfs_rq_util_change() calls in the
> current code when we consider this.

Thanks for checking. I'll revisit the problem as indicated previously. This
patch is still needed; I'll at least update the rationale and fix the
highlighted issues with decay.

> 
> And avoiding updates for a smaller task to make sure updates for a
> bigger task go through is IMHO not feasible.

Where did this line of thought come from? This patch is about consolidating
how the scheduler requests frequency updates. And the later patches require
the single update at tick to pass the new SCHED_CPUFREQ_PERF_HINTS.
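
Roughly what I mean by the single update at tick (illustrative only; the
actual plumbing lives in the later patches):

	/* once per tick, with the performance hint attached */
	cpufreq_update_util(rq, SCHED_CPUFREQ_PERF_HINTS);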

If you're referring to the ignore_short_tasks() logic in the later patches,
then we only ignore the performance hints for this task.
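
I.e. something along these lines (sketch, not the actual code from that
patch):

	if (ignore_short_tasks(p))
		/* drop only the perf hint; the update itself still goes out */
		flags &= ~SCHED_CPUFREQ_PERF_HINTS;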

Why not feasible? What's the rationale?

> 
> I wonder how much influence this patch has on the test results
> presented in the patch header?

The only change of behavior is in how we deal with decay, which I thought
wouldn't introduce a functional change, but as Christian caught, it did.
Other than that, no functional changes that could affect the test results
are supposed to happen, AFAICT.


Cheers

--
Qais Yousef
