Message-ID: <CAKfTPtDVQK0DryE9XCgcdXk1Az4NjDsf+Cesf1Fq8=qV-mQVzQ@mail.gmail.com>
Date: Wed, 13 Dec 2023 09:24:36 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Abel Wu <wuyun.abel@...edance.com>
Cc: WangJinchao <wangjinchao@...sion.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org, stone.xulei@...sion.com
Subject: Re: [PATCH v2] sched/fair: merge same code in enqueue_task_fair
On Wed, 13 Dec 2023 at 09:19, Abel Wu <wuyun.abel@...edance.com> wrote:
>
> Hi Jinchao,
>
> On 12/13/23 3:12 PM, WangJinchao Wrote:
> > The code below is duplicated in two for loops and needs to be
> > consolidated
>
> It doesn't need to, but it can actually bring some benefit from
> the point of view of text size, especially in warehouse-scale
> computers where icache is extremely contended.
>
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-56 (-56)
> Function                                     old     new   delta
> enqueue_task_fair                            936     880     -56
> Total: Before=64899, After=64843, chg -0.09%
>
> >
> > Signed-off-by: WangJinchao <wangjinchao@...sion.com>
> > ---
> > kernel/sched/fair.c | 31 ++++++++-----------------------
> > 1 file changed, 8 insertions(+), 23 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index d7a3c63a2171..e1373bfd4f2e 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6681,30 +6681,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >  		cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
> >  
> >  	for_each_sched_entity(se) {
> > -		if (se->on_rq)
> > -			break;
> >  		cfs_rq = cfs_rq_of(se);
> > -		enqueue_entity(cfs_rq, se, flags);
> > -
> > -		cfs_rq->h_nr_running++;
> > -		cfs_rq->idle_h_nr_running += idle_h_nr_running;
> > -
> > -		if (cfs_rq_is_idle(cfs_rq))
> > -			idle_h_nr_running = 1;
> > -
> > -		/* end evaluation on encountering a throttled cfs_rq */
> > -		if (cfs_rq_throttled(cfs_rq))
> > -			goto enqueue_throttle;
> > -
> > -		flags = ENQUEUE_WAKEUP;
> > -	}
> > -
> > -	for_each_sched_entity(se) {
> > -		cfs_rq = cfs_rq_of(se);
> > -
> > -		update_load_avg(cfs_rq, se, UPDATE_TG);
> > -		se_update_runnable(se);
> > -		update_cfs_group(se);
> > +		if (se->on_rq) {
> > +			update_load_avg(cfs_rq, se, UPDATE_TG);
> > +			se_update_runnable(se);
> > +			update_cfs_group(se);
> > +		} else {
> > +			enqueue_entity(cfs_rq, se, flags);
> > +			flags = ENQUEUE_WAKEUP;
> > +		}
> >  
> >  		cfs_rq->h_nr_running++;
> >  		cfs_rq->idle_h_nr_running += idle_h_nr_running;
>
> I have no strong opinion about this 'cleanup', but the same pattern
> can also be found in dequeue_task_fair() and I think it would be
> better to get them synchronized.
I agree; I don't see any benefit from this change.
>
> Thanks,
> Abel