Message-ID: <xhsmhbjociso8.mognet@vschneid-thinkpadt14sgen2i.remote.csb>
Date: Mon, 18 Aug 2025 16:57:27 +0200
From: Valentin Schneider <vschneid@...hat.com>
To: Aaron Lu <ziqianlu@...edance.com>, Ben Segall <bsegall@...gle.com>, K
Prateek Nayak <kprateek.nayak@....com>, Peter Zijlstra
<peterz@...radead.org>, Chengming Zhou <chengming.zhou@...ux.dev>, Josh
Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Xi Wang <xii@...gle.com>
Cc: linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt
<rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>, Chuyi Zhou
<zhouchuyi@...edance.com>, Jan Kiszka <jan.kiszka@...mens.com>, Florian
Bezdeka <florian.bezdeka@...mens.com>, Songtang Liu
<liusongtang@...edance.com>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v3 4/5] sched/fair: Task based throttle time accounting
On 15/07/25 15:16, Aaron Lu wrote:
> @@ -5287,19 +5287,12 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> check_enqueue_throttle(cfs_rq);
> list_add_leaf_cfs_rq(cfs_rq);
> #ifdef CONFIG_CFS_BANDWIDTH
> - if (throttled_hierarchy(cfs_rq)) {
> + if (cfs_rq->pelt_clock_throttled) {
> struct rq *rq = rq_of(cfs_rq);
>
> - if (cfs_rq_throttled(cfs_rq) && !cfs_rq->throttled_clock)
> - cfs_rq->throttled_clock = rq_clock(rq);
> - if (!cfs_rq->throttled_clock_self)
> - cfs_rq->throttled_clock_self = rq_clock(rq);
> -
> - if (cfs_rq->pelt_clock_throttled) {
> - cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
> - cfs_rq->throttled_clock_pelt;
> - cfs_rq->pelt_clock_throttled = 0;
> - }
> + cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) -
> + cfs_rq->throttled_clock_pelt;
> + cfs_rq->pelt_clock_throttled = 0;
This is the only hunk in this patch that touches the PELT accounting; should it
have been folded into patch 3, which does the rest of the PELT accounting changes?
> @@ -7073,6 +7073,9 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
> if (cfs_rq_is_idle(cfs_rq))
> h_nr_idle = h_nr_queued;
>
> + if (throttled_hierarchy(cfs_rq) && task_throttled)
> + record_throttle_clock(cfs_rq);
> +
Apologies if this has been discussed before.
So the throttled time (as reported by cpu.stat.local) is now accounted from
the point at which the first task in the hierarchy gets effectively
throttled - IOW the first time a task in a throttled hierarchy reaches
resume_user_mode_work() - as opposed to as soon as the hierarchy runs out
of quota.
The gap between the two shouldn't be much, but that should at the very
least be highlighted in the changelog.
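To make that concrete, here is a minimal sketch of what I understand
record_throttle_clock() to end up doing - essentially the clock-stamping
logic that the first hunk removes from enqueue_entity(), just invoked later
from dequeue_entities(). The body below is my reading of the series, not a
copy of the actual helper:

static inline void record_throttle_clock(struct cfs_rq *cfs_rq)
{
        struct rq *rq = rq_of(cfs_rq);

        /* First time a task in this throttled hierarchy is actually throttled. */
        if (cfs_rq_throttled(cfs_rq) && !cfs_rq->throttled_clock)
                cfs_rq->throttled_clock = rq_clock(rq);

        /* Same, but for this cfs_rq's own (self) throttled accounting. */
        if (!cfs_rq->throttled_clock_self)
                cfs_rq->throttled_clock_self = rq_clock(rq);
}

IOW the clocks only start ticking once a task is dequeued on its way out of
the kernel, which is where the gap w.r.t. quota depletion comes from.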
AFAICT this is a purely user-facing stat; Josh/Tejun, any opinions on this?