Message-ID: <ab91bfe1-4331-4e33-87fa-4f4fe96adb00@amd.com>
Date: Thu, 20 Mar 2025 12:23:01 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Josh Don <joshdon@...gle.com>, Aaron Lu <ziqianlu@...edance.com>
CC: Valentin Schneider <vschneid@...hat.com>, Ben Segall <bsegall@...gle.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, <linux-kernel@...r.kernel.org>,
Juri Lelli <juri.lelli@...hat.com>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Mel Gorman
<mgorman@...e.de>, Chengming Zhou <chengming.zhou@...ux.dev>, Chuyi Zhou
<zhouchuyi@...edance.com>, Xi Wang <xii@...gle.com>
Subject: Re: [RFC PATCH 2/7] sched/fair: Handle throttle path for task based
throttle
Hello Josh,

On 3/16/2025 8:55 AM, Josh Don wrote:
> Hi Aaron,
>
>> static int tg_throttle_down(struct task_group *tg, void *data)
>> {
>>         struct rq *rq = data;
>>         struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
>> +       struct task_struct *p;
>> +       struct rb_node *node;
>> +
>> +       cfs_rq->throttle_count++;
>> +       if (cfs_rq->throttle_count > 1)
>> +               return 0;
>>
>>         /* group is entering throttled state, stop time */
>> -       if (!cfs_rq->throttle_count) {
>> -               cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
>> -               list_del_leaf_cfs_rq(cfs_rq);
>> +       cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
>> +       list_del_leaf_cfs_rq(cfs_rq);
>>
>> -               SCHED_WARN_ON(cfs_rq->throttled_clock_self);
>> -               if (cfs_rq->nr_queued)
>> -                       cfs_rq->throttled_clock_self = rq_clock(rq);
>> +       SCHED_WARN_ON(cfs_rq->throttled_clock_self);
>> +       if (cfs_rq->nr_queued)
>> +               cfs_rq->throttled_clock_self = rq_clock(rq);
>> +
>> +       WARN_ON_ONCE(!list_empty(&cfs_rq->throttled_limbo_list));
>> +       /*
>> +        * rq_lock is held, current is (obviously) executing this in kernelspace.
>> +        *
>> +        * All other tasks enqueued on this rq have their saved PC at the
>> +        * context switch, so they will go through the kernel before returning
>> +        * to userspace. Thus, there are no tasks-in-userspace to handle, just
>> +        * install the task_work on all of them.
>> +        */
>> +       node = rb_first(&cfs_rq->tasks_timeline.rb_root);
>> +       while (node) {
>> +               struct sched_entity *se = __node_2_se(node);
>> +
>> +               if (!entity_is_task(se))
>> +                       goto next;
>> +
>> +               p = task_of(se);
>> +               task_throttle_setup_work(p);
>> +next:
>> +               node = rb_next(node);
>> +       }
>
> I'd like to strongly push back on this approach. This adds quite a lot
> of extra computation to an already expensive path
> (throttle/unthrottle). e.g. this function is part of the cgroup walk
> and so it is already O(cgroups) for the number of cgroups in the
> hierarchy being throttled. This gets even worse when you consider that
> we repeat this separately across all the cpus that the
> bandwidth-constrained group is running on. Keep in mind that
> throttle/unthrottle is done with rq lock held and IRQ disabled.
On this note, do you have any statistics on how many tasks are
throttled per-CPU on your system? The info from:

    sudo ./bpftrace -e "kprobe:throttle_cfs_rq { \
        @nr_queued[((struct cfs_rq *)arg0)->h_nr_queued] = count(); \
        @nr_runnable[((struct cfs_rq *)arg0)->h_nr_runnable] = count(); \
    }"

could help estimate the worst-case times we should expect with
per-task throttling.
>
> In K Prateek's last RFC, there was discussion of using context
> tracking; did you consider that approach any further? We could keep
> track of the number of threads within a cgroup hierarchy currently in
> kernel mode (similar to h_nr_runnable), and thus simplify down the
> throttling code here.
Based on Chengming's latest suggestion, we can keep tg_throttle_down()
as is and instead tag the task at pick time using throttled_hierarchy(),
which will work too. It'll most likely end up doing:

    if (throttled_hierarchy(cfs_rq_of(&p->se)))
            task_throttle_setup_work(p);

so the only overhead for users not using CFS bandwidth is the
cfs_rq->throttle_count check. If it is set, the overhead of setting up
the throttle work simply moves from the throttle path to the pick path,
and only for the throttled tasks; this also avoids adding unnecessary
work to tasks that may never get picked before the unthrottle.
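For illustration, a minimal sketch of how that could look, assuming the
check runs right after the fair class has picked a task. The helper name
fair_pick_post() and its call site are made up here; throttled_hierarchy()
and cfs_bandwidth_used() are existing fair.c helpers, and
task_throttle_setup_work() is from Aaron's series:

/*
 * Hypothetical hook, for illustration only: called with the rq lock
 * held once the fair class has picked @p. throttled_hierarchy() only
 * reads cfs_rq->throttle_count, so with no bandwidth limits in use
 * this is a single load and branch per pick.
 */
static void fair_pick_post(struct task_struct *p)
{
        /* static key; patched out when no CFS bandwidth user exists */
        if (!cfs_bandwidth_used())
                return;

        if (throttled_hierarchy(cfs_rq_of(&p->se)))
                task_throttle_setup_work(p);
}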
>
> Best,
> Josh
--
Thanks and Regards,
Prateek