Message-ID: <20240105012153.zawr4pyd4dbrk4sf@airbuntu>
Date: Fri, 5 Jan 2024 01:21:53 +0000
From: Qais Yousef <qyousef@...alina.io>
To: Pierre Gondois <pierre.gondois@....com>
Cc: Ingo Molnar <mingo@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
linux-kernel@...r.kernel.org, Lukasz Luba <lukasz.luba@....com>,
Wei Wang <wvw@...gle.com>, Rick Yiu <rickyiu@...gle.com>,
Chung-Kai Mei <chungkai@...gle.com>
Subject: Re: [PATCH RFC 3/3] sched/fair: Implement new type of misfit
MISFIT_POWER
Hi Pierre
On 01/04/24 15:28, Pierre Gondois wrote:
> Hello Qais,
>
> I tried to do as you indicated at:
> https://lore.kernel.org/all/20231228233848.piyodw2s2ytli37a@airbuntu/
> without success. I can see that the task is migrated from a big CPU to
> smaller CPUs, but it doesn't seem to be related to the new MISFIT_POWER
> feature.
Hmmm. It is possible something went wrong while preparing this set of patches.
I do remember trying this patch quickly, but judging by the bug you found,
I might have missed doing a fresh run after the last round of changes, so
something could have broken.
Let me retry it and see what's going on.
> Indeed, if the uclamp_max value of a CPU-bound task is set to 0, isn't it
> normal to have EAS/feec() migrating the task to smaller CPUs? I added tracing
> inside is_misfit_task() and load_balance()'s misfit path and could not see
> this path being used.
I did have similar debug messages and I could see them triggered. To be honest,
I spent most of my time working on this against 5.10 and 5.15 kernels. By the
time I started the forward port I was already working on removing max
aggregation, and this whole patch needed to be rewritten, so I kept it as
a guideline. My focus was on getting the misfit generalization done (patches
1 and 2) and demonstrating how it could potentially be used to implement better
power-based balancing logic.
The main ideas are:
1. We need to detect the MISFIT_POWER.
2. We need to force every CPU to try to pull.
3. We need to use feec() to decide which CPU to pull.
I'm not sure if there's a better way. So I was hoping to see if there are
other PoVs to consider.
>
> On 12/9/23 02:17, Qais Yousef wrote:
> > MISFIT_POWER requires moving the task to a more efficient CPU.
> >
> > This can happen when a big task is capped by uclamp_max, but another
> > task wakes up on this CPU that can lift the capping, in this case we
> > need to migrate it to another, likely smaller, CPU to save power.
>
> Just to be sure, are we talking about the following path, where sugov
> decides which OPP to select ?
> sugov_get_util()
> \-effective_cpu_util()
> \-uclamp_rq_util_with()
>
> To try to describe the issue in my own words, IIUC, the issue comes from
> the fact that during energy estimations in feec(), we don't estimate the
> impact of enqueuing a task on the rq's UCLAMP_MAX value. So a rq with a
> little UCLAMP_MAX value might see the value grow if an uncapped task
> is enqueued, leading to raising the frequency and consuming more
> power.
> Thus, this patch tries to detect such a scenario and migrate the clamped
> tasks.
Yes, to a big degree. See below.
> Maybe another approach would be to estimate the impact of enqueuing a
> task on the rq's UCLAMP_MAX value ?
I'd like to think we'll remove rq uclamp value altogether, hopefully.
Generally I'd welcome ideas on what kinds of MISFIT_POWER scenarios we have.
With uclamp_max, the scenario is that these tasks can be busy loops (their
util_avg is very high) and will cause anything else RUNNING alongside them to
run at max frequency, regardless of that task's own util_avg. UCLAMP_MAX tells
us we have an opportunity to move them somewhere less expensive.
Detection logic will be harder without rq uclamp_max.
My first ever detection logic was actually to check whether the task is running
on the smallest CPU that fits it, and if not, move it there. Then I switched to
detecting whether it is capped or not, with feec() deciding the best next place
for it to go.
There's another problem: when these tasks end up on a big core, they make the
CPU look busy and can prevent other tasks from running 'comfortably' alongside
them. So not only do they waste power, they also get in the way of other work
getting done with less interference. I'm not sure if this should be treated as
a different type of misfit though.
We need to get MISFIT_POWER in first and then see whether the interference
issue isn't resolved automatically. From a power perspective, it is wrong to
keep a busy loop that is capped to fit on a mid or little CPU on the big for
an extended period of time. As a by-product, the interference issue should be
resolved too, in theory at least.
Also I'm not sure if we can have MISFIT_POWER that isn't based on UCLAMP_MAX.
I couldn't come up with a scenario yet, but I don't think we need to restrict
ourselves to UCLAMP_MAX-only ones. So ideas are welcome :-)
Thanks!
--
Qais Yousef