Message-ID: <YSS1/rqqsGaBX/yQ@hirez.programming.kicks-ass.net>
Date: Tue, 24 Aug 2021 11:03:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vineeth Pillai <vineethrp@...il.com>
Cc: Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Joel Fernandes <joel@...lfernandes.org>,
linux-kernel@...r.kernel.org, tao.zhou@...ux.dev
Subject: Re: [PATCH] sched/core: fix pick_next_task 'max' tracking
On Mon, Aug 23, 2021 at 04:25:28PM -0400, Vineeth Pillai wrote:
> Hi Peter,
>
>
> > > Here, we should have instead updated 'max' when picking for SMT-1. Note
> > > that this code would eventually have righted itself, since the retry
> > > loop would re-pick p2, and update 'max' accordingly. However, this patch
> > > avoids the extra round-trip.
> >
> > Going with the observation Tao made; how about we rewrite the whole lot
> > to not be mind-bending complicated :-)
> >
> > How's this? It seems to build and pass the core-sched selftest thingy
> > (so it must be perfect, right? :-)
> >
> Nice, the code is much simpler now :-). A minor suggestion below.
>
> > - for_each_cpu(i, smt_mask) {
> > - struct rq *rq_i = cpu_rq(i);
> > -
> > + /*
> > + * For each thread: do the regular task pick and find the max prio task
> > + * amongst them.
> > + *
> > + * Tie-break prio towards the current CPU
> > + */
> > + for_each_cpu_wrap(i, smt_mask, cpu) {
> > + rq_i = cpu_rq(i);
> > rq_i->core_pick = NULL;
> >
> > if (i != cpu)
> > update_rq_clock(rq_i);
> > +
> > + for_each_class(class) {
> > + p = rq_i->core_temp = class->pick_task(rq_i);
> I think we can use core_pick to store the pick here, so core_temp
> might not be required. What do you think?
Indeed we can; makes the code a little less obvious but saves a few
bytes.
Let me go do that and also attempt a Changelog to go with it ;-)