Message-ID: <826fbcc9-8bd5-4598-ae5d-d65092823b7c@arm.com>
Date: Thu, 12 Sep 2024 17:58:53 +0100
From: Christian Loehle <christian.loehle@....com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
linux-pm <linux-pm@...r.kernel.org>, "Rafael J. Wysocki" <rafael@...nel.org>
Cc: Qais Yousef <qyousef@...alina.io>, Juri Lelli <juri.lelli@...hat.com>,
Ingo Molnar <mingo@...hat.com>, Viresh Kumar <viresh.kumar@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Pierre Gondois <pierre.gondois@....com>
Subject: Re: [PATCH] cpufreq/schedutil: Only bind threads if needed
On 9/12/24 14:53, Christian Loehle wrote:
> Remove the unconditional binding of sugov kthreads to the affected CPUs
> if the cpufreq driver indicates that updates can happen from any CPU.
> This allows userspace to set affinities either to save power (waking up
> bigger CPUs on HMP can be expensive) or to increase performance (by
> letting the utilized CPUs run without preemption by the sugov kthread).
>
> Without this patch the behavior of sugov threads will basically be a
> boot-time dice roll on which CPU of the PD has to handle all the
> cpufreq updates. With the recent reductions in update filtering these
> two basic problems become more and more apparent:
> 1. The wake_cpu might be idle and we are waking it up from another
> CPU just for the cpufreq update. Apart from wasting power, the exit
> latency of its idle state might be longer than the sugov thread's
> running time, essentially delaying the cpufreq update unnecessarily.
> 2. We are preempting either the requesting or another busy CPU of the
> PD, while the update could be done from a CPU that we deem less
> important and pay the price of an IPI and two context-switches.
>
> The change essentially amounts to not setting PF_NO_SETAFFINITY when
> dvfs_possible_from_any_cpu is set; there is no behavior change if
> userspace doesn't touch affinities.
>
> Signed-off-by: Christian Loehle <christian.loehle@....com>
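
Roughly, the change amounts to something like the following in the
kthread setup (sketched here for illustration, not the exact hunk):

```
	/*
	 * Only hard-bind the kthread (which sets PF_NO_SETAFFINITY) when
	 * the driver requires updates to come from a CPU of the policy;
	 * otherwise just set a default affinity that userspace may later
	 * change.
	 */
	if (policy->dvfs_possible_from_any_cpu)
		set_cpus_allowed_ptr(thread, policy->related_cpus);
	else
		kthread_bind_mask(thread, policy->related_cpus);
```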
I'll add some numbers to illustrate, although the example might not be
particularly realistic.
The classic fio workload will trigger cpufreq updates very often, so
I used that on the Pixel6, with fio's affinity set to CPU7 (bitmask
0x80; the big PD is CPUs [6,7]).
Without this patch we have either all sugov enqueues on CPU6 or CPU7,
depending on where the first CPU frequency request (since boot) was
issued from (the deadline select_task_rq is rather simple, so it
will just use wake_cpu if that is still valid, which here it always is).
I set different affinities for the sugov:6 worker and annotate
IOPS (throughput) and average power (mW); each test runs for 30s.
cpumask  IOPS  avg power (mW)  per-cluster power (mW)
80       7477  888.3           Big: 742.6, Mid: 11.9, Little: 133.8
40       7378  942.8           Big: 797.4, Mid: 12.4, Little: 133.0
f        7469  873.7           Big: 718.3, Mid: 11.9, Little: 143.6
2        7501  872.8           Big: 718.6, Mid: 11.7, Little: 142.5
1        7392  859.5           Big: 704.9, Mid: 12.2, Little: 142.4
Throughput is roughly comparable in all cases; in any case the
frequency only bounces between capacity 1024 and 512 because of the
instability of iowait boost.
For 40 (CPU6) we see significantly more power usage, as CPU6 is woken
up by the sugov worker so often that it is effectively prevented from
powering down.
The f, 2 and 1 affinities have slightly higher power on the littles
(as expected), but significantly less power on the bigs, since the
bigs then avoid both the double context-switch on preemption and the
actual sugov CPU cycles.
Mask 1 has by far the least power, as CPU0 is only ever in WFI anyway:
it handles the IO interrupts of the fio job running on CPU7.
Granted that is a somewhat worst-case scenario just to illustrate
the problem.
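
For reference, the affinities above can be set from userspace along
these lines (a sketch; the "sugov:6" name and the masks assume the
Pixel6 topology described above):

```shell
# Sketch (CPU numbers from the Pixel6 setup above): pin the big PD's
# sugov worker, "sugov:6", to CPU7.
# Masks: 0x80 = CPU7, 0x40 = CPU6, 0xf = the four littles.
cpu=7
mask=$(printf '%x' $((1 << cpu)))       # "80" for CPU7
pid=$(pgrep -f 'sugov:6' | head -n1)    # empty if no such kthread here
if [ -n "$pid" ]; then
	taskset -p "$mask" "$pid"
fi
echo "$mask"
```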
I also have a patch for preferring wake_cpu = smp_processor_id() for
the sugov worker, which is somewhat adjacent to this; the numbers
above make a case for that even without touching affinities, but I
thought this one might be less controversial for now.