Message-ID: <dc54a6ab-2529-4def-ae7d-6a217e3bc1bc@amd.com>
Date: Fri, 4 Jul 2025 10:04:13 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Benjamin Segall <bsegall@...gle.com>, Aaron Lu <ziqianlu@...edance.com>
Cc: Valentin Schneider <vschneid@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Chengming Zhou <chengming.zhou@...ux.dev>, Josh Don <joshdon@...gle.com>,
Ingo Molnar <mingo@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Xi Wang <xii@...gle.com>,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chuyi Zhou <zhouchuyi@...edance.com>, Jan Kiszka <jan.kiszka@...mens.com>,
Florian Bezdeka <florian.bezdeka@...mens.com>
Subject: Re: [PATCH v2 0/5] Defer throttle when task exits to user
Hello Ben,
On 7/3/2025 3:30 AM, Benjamin Segall wrote:
> Aaron Lu <ziqianlu@...edance.com> writes:
>
>> For pelt clock, I chose to keep the current behavior to freeze it on
>> cfs_rq's throttle time. The assumption is that tasks running in kernel
>> mode should not last too long, freezing the cfs_rq's pelt clock can keep
>> its load and its corresponding sched_entity's weight. Hopefully, this can
>> result in a stable situation for the remaining running tasks to quickly
>> finish their jobs in kernel mode.
>
> I suppose the way that this would go wrong would be CPU 1 using up all
> of the quota, and then a task waking up on CPU 2 and trying to run in
> the kernel for a while. I suspect pelt time needs to also keep running
> until all the tasks are asleep (and that's what we have been running at
> google with the version based on separate accounting, so we haven't
> accidentally done a large scale test of letting it pause).
Thinking out loud ...
One thing this can possibly do is create a lot of:
throttled -> partially unthrottled -> throttled
transitions when tasks wake up on a throttled hierarchy, run for a
while, and then go back to sleep. If the PELT clocks aren't frozen,
this means either:
1. Do a full walk_tg_tree_from(), placing every leaf cfs_rq that has
   any load associated back onto the leaf list so PELT can progress,
   only to remove them all again once the tasks go back to sleep. In
   theory a great many of these transitions are possible, which is
   not ideal.
2. Propagate the delta time during which PELT was not frozen at
   unthrottle and, if it isn't 0, do an update_load_avg() to sync
   PELT. This will increase the overhead of the tg_tree callback,
   which isn't ideal either. It can also complicate the enqueue path
   since the PELT of the cfs_rq hierarchy being enqueued may need
   correction before the task can be enqueued.
I know Josh hates both approaches since tg_tree walks are already very
expensive in your use cases and have to be done in a non-preemptible
context holding the rq_lock, but which do you think is the lesser of
the two evils? Or is there a better solution that I have completely
missed?
>
> Otherwise it does look ok, so long as we're ok with increasing distribute
> time again.
--
Thanks and Regards,
Prateek