Message-ID: <20250314114011.GH1633113@bytedance>
Date: Fri, 14 Mar 2025 19:40:11 +0800
From: Aaron Lu <ziqianlu@...edance.com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Valentin Schneider <vschneid@...hat.com>,
Ben Segall <bsegall@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Chuyi Zhou <zhouchuyi@...edance.com>
Subject: Re: [External] Re: [RFC PATCH 5/7] sched/fair: Take care of
group/affinity/sched_class change for throttled task
On Fri, Mar 14, 2025 at 10:21:15AM +0530, K Prateek Nayak wrote:
> Hello Aaron,
>
> On 3/13/2025 12:51 PM, Aaron Lu wrote:
> [..snip..]
>
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5876,8 +5876,8 @@ static void throttle_cfs_rq_work(struct callback_head *work)
> >
> >  	update_rq_clock(rq);
> >  	WARN_ON_ONCE(!list_empty(&p->throttle_node));
> > -	list_add(&p->throttle_node, &cfs_rq->throttled_limbo_list);
> >  	dequeue_task_fair(rq, p, DEQUEUE_SLEEP | DEQUEUE_SPECIAL);
> > +	list_add(&p->throttle_node, &cfs_rq->throttled_limbo_list);
> >  	resched_curr(rq);
>
> nit. Perhaps this bit can be moved to Patch 2 to consolidate all
> changes in throttle_cfs_rq_work()
No problem.
I placed it here to better illustrate why list_add() has to be done
after dequeue_task_fair().
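
FWIW, below is a tiny standalone userspace sketch (my own toy model, not
kernel code) of that ordering question. It assumes -- and the assumption
is mine, not something stated in the patch -- that the dequeue path may
take the task back off the limbo list if it finds it there; the
fake_dequeue_task_fair() helper and the two global list heads are
hypothetical stand-ins, so treat it as an illustration only:

/*
 * Toy userspace model of the list_add()/dequeue ordering in
 * throttle_cfs_rq_work(). If the dequeue path resets throttle_node
 * (assumed here), doing list_add() first would be silently undone.
 */
#include <stdio.h>
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void INIT_LIST_HEAD(struct list_head *l) { l->next = l->prev = l; }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

static void list_del_init(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	INIT_LIST_HEAD(entry);
}

static bool list_empty(const struct list_head *head)
{
	return head->next == head;
}

/* Stand-ins for p->throttle_node and cfs_rq->throttled_limbo_list. */
static struct list_head throttle_node = LIST_HEAD_INIT(throttle_node);
static struct list_head throttled_limbo_list = LIST_HEAD_INIT(throttled_limbo_list);

/*
 * Hypothetical stand-in for dequeue_task_fair(): assume the dequeue
 * path drops the task from the limbo list if it is already on it.
 */
static void fake_dequeue_task_fair(void)
{
	if (!list_empty(&throttle_node))
		list_del_init(&throttle_node);
}

int main(void)
{
	/* Wrong order: list_add() first, the dequeue undoes it. */
	list_add(&throttle_node, &throttled_limbo_list);
	fake_dequeue_task_fair();
	printf("list_add before dequeue: on limbo list? %s\n",
	       list_empty(&throttled_limbo_list) ? "no" : "yes");

	/* Order from the hunk above: dequeue first, then list_add(). */
	fake_dequeue_task_fair();
	list_add(&throttle_node, &throttled_limbo_list);
	printf("list_add after dequeue:  on limbo list? %s\n",
	       list_empty(&throttled_limbo_list) ? "no" : "yes");

	return 0;
}

With the list_add() done last, the task is guaranteed to still be on the
limbo list when the work returns, which is what the WARN_ON_ONCE() above
relies on next time around.
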
Thanks,
Aaron
> >
> > out_unlock:
>
> [..snip..]
>
> --
> Thanks and Regards,
> Prateek
>