Message-ID: <20250314084854.GA1633113@bytedance>
Date: Fri, 14 Mar 2025 16:48:54 +0800
From: Aaron Lu <ziqianlu@...edance.com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Valentin Schneider <vschneid@...hat.com>,
Ben Segall <bsegall@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Chuyi Zhou <zhouchuyi@...edance.com>
Subject: Re: [RFC PATCH 2/7] sched/fair: Handle throttle path for task based
throttle
On Thu, Mar 13, 2025 at 11:44:49PM +0530, K Prateek Nayak wrote:
> Hello Aaron,
>
> P.S. I've fixed the wrapped lines and have been testing the series. So
> far I haven't run into any issues on my machine. Will report back if
> anything surfaces.
Thanks a lot for taking the time to review and test.
>
> I have a few comments inlined below.
>
> On 3/13/2025 12:51 PM, Aaron Lu wrote:
>
> [..snip..]
>
> > +static inline void task_throttle_setup_work(struct task_struct *p)
> > +{
> > + /*
> > + * Kthreads and exiting tasks don't return to userspace, so adding the
> > + * work is pointless
> > + */
> > + if ((p->flags & (PF_EXITING | PF_KTHREAD)))
> > + return;
> > +
> > + if (task_has_throttle_work(p))
> > + return;
> > +
> > + task_work_add(p, &p->sched_throttle_work, TWA_RESUME);
>
> Does it make sense to add the throttle work to a delayed task? It may be
> dequeued soon and when it is queued back, the throttle situation might
> have changed but the work is unnecessarily run. Could the throttle work
> be instead added at the point of enqueue for delayed tasks?

I missed the case that a delayed task can still be on the cfs_rq, and I
agree there is no need to add throttle work to a delayed task.

Yes. If a delayed task gets re-queued and its cfs_rq is in a throttled
hierarchy, the throttle work should be added at that point.
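
Something along these lines, perhaps (an untested sketch only; the helper
name and its exact call site in the enqueue path are my own invention,
not part of the posted series):

/*
 * Sketch: when a delayed task gets re-queued, add the throttle work
 * only if its cfs_rq sits in a throttled hierarchy. Where exactly in
 * the enqueue path this should be called still needs working out.
 */
static void requeue_delayed_throttle_work(struct rq *rq, struct task_struct *p)
{
	struct cfs_rq *cfs_rq = cfs_rq_of(&p->se);

	if (throttled_hierarchy(cfs_rq))
		task_throttle_setup_work(p);
}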
>
> > +}
> > +
> > static int tg_throttle_down(struct task_group *tg, void *data)
> > {
> > struct rq *rq = data;
> > struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];
> > + struct task_struct *p;
> > + struct rb_node *node;
> > +
> > + cfs_rq->throttle_count++;
> > + if (cfs_rq->throttle_count > 1)
> > + return 0;
> >
> > /* group is entering throttled state, stop time */
> > - if (!cfs_rq->throttle_count) {
> > - cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
> > - list_del_leaf_cfs_rq(cfs_rq);
> > + cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
>
> One concern here is that the pelt clock is seemingly frozen despite the
> hierarchy still being runnable. I've still not tracked down whether
> it'll cause any problems once unthrottled and all the throttled time is
> negated from the pelt clock, but are there any concerns here?
I chose to do it this way because:

1. I expect that most of the time, if a task has to continue to run
   after its cfs_rq gets throttled, that remaining time is relatively
   small and should not cause much impact. But I agree there can be
   times when a task runs relatively long;
2. I think the original intent of freezing a cfs_rq's pelt clock on
   throttle is so that on unthrottle, it can restore its load (without
   its load being decayed etc.). If I chose not to freeze its pelt
   clock on throttle because some task is still running in kernel mode,
   then since some of this cfs_rq's tasks are throttled, its load can
   become smaller and this can impact its load on unthrottle.

I think neither approach is perfect, so I chose the simple one for now
:) Not sure if my thinking is correct though.
> Maybe this can be done at dequeue when cfs_rq->nr_queued on a
> throttled_hierarchy() reaches 0.
Yes, this looks more consistent; maybe I should switch to this approach.
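
Roughly like the below (a sketch only, untested; whether
list_del_leaf_cfs_rq() also belongs here would need checking against the
rest of the series):

/*
 * Sketch: freeze the pelt clock only once the throttled cfs_rq has no
 * queued tasks left, instead of at throttle time while some tasks may
 * still be running in kernel mode.
 */
static void throttled_cfs_rq_empty(struct rq *rq, struct cfs_rq *cfs_rq)
{
	if (!throttled_hierarchy(cfs_rq) || cfs_rq->nr_queued)
		return;

	cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq);
	list_del_leaf_cfs_rq(cfs_rq);
}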
> > + list_del_leaf_cfs_rq(cfs_rq);
> >
> > - SCHED_WARN_ON(cfs_rq->throttled_clock_self);
> > - if (cfs_rq->nr_queued)
> > - cfs_rq->throttled_clock_self = rq_clock(rq);
> > + SCHED_WARN_ON(cfs_rq->throttled_clock_self);
> > + if (cfs_rq->nr_queued)
> > + cfs_rq->throttled_clock_self = rq_clock(rq);
> > +
> > + WARN_ON_ONCE(!list_empty(&cfs_rq->throttled_limbo_list));
> > + /*
> > + * rq_lock is held, current is (obviously) executing this in kernelspace.
> > + *
> > + * All other tasks enqueued on this rq have their saved PC at the
> > + * context switch, so they will go through the kernel before returning
> > + * to userspace. Thus, there are no tasks-in-userspace to handle, just
> > + * install the task_work on all of them.
> > + */
> > + node = rb_first(&cfs_rq->tasks_timeline.rb_root);
> > + while (node) {
> > + struct sched_entity *se = __node_2_se(node);
> > +
> > + if (!entity_is_task(se))
> > + goto next;
> > +
> > + p = task_of(se);
> > + task_throttle_setup_work(p);
> > +next:
> > + node = rb_next(node);
> > + }
> > +
> > + /* curr is not in the timeline tree */
> > + if (cfs_rq->curr && entity_is_task(cfs_rq->curr)) {
>
> I believe we can reach here from pick_next_task_fair() ->
> check_cfs_rq_runtime() -> throttle_cfs_rq() in which case cfs_rq->curr
> will still be set despite the task being blocked since put_prev_entity()
> has not been called yet.
>
> I believe there should be a check for task_on_rq_queued() here for the
> current task.
Ah right, I'll see how to fix this.
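Perhaps something along these lines (untested sketch on top of the
quoted hunk):

	/* curr is not in the timeline tree */
	if (cfs_rq->curr && entity_is_task(cfs_rq->curr)) {
		p = task_of(cfs_rq->curr);
		/*
		 * cfs_rq->curr can still be set for a blocked task when
		 * we get here from pick_next_task_fair() ->
		 * check_cfs_rq_runtime(), before put_prev_entity() has
		 * run, so only add the work for a queued task.
		 */
		if (task_on_rq_queued(p))
			task_throttle_setup_work(p);
	}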
Thanks,
Aaron
> > + p = task_of(cfs_rq->curr);
> > + task_throttle_setup_work(p);
> > }
> > - cfs_rq->throttle_count++;
> >
> > return 0;
> > }
> >
>
> [..snip..]
>
> --
> Thanks and Regards,
> Prateek
>