Message-ID: <20201125083717.GM2414@hirez.programming.kicks-ass.net>
Date: Wed, 25 Nov 2020 09:37:17 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Nishanth Aravamudan <naravamudan@...italocean.com>,
Julien Desfossez <jdesfossez@...italocean.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Vineeth Pillai <viremana@...ux.microsoft.com>,
Aaron Lu <aaron.lwe@...il.com>,
Aubrey Li <aubrey.intel@...il.com>, tglx@...utronix.de,
linux-kernel@...r.kernel.org, mingo@...nel.org,
torvalds@...ux-foundation.org, fweisbec@...il.com,
keescook@...omium.org, kerrnel@...gle.com,
Phil Auld <pauld@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>, vineeth@...byteword.org,
Chen Yu <yu.c.chen@...el.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Agata Gruza <agata.gruza@...el.com>,
Antonio Gomez Iglesias <antonio.gomez.iglesias@...el.com>,
graf@...zon.com, konrad.wilk@...cle.com, dfaggioli@...e.com,
pjt@...gle.com, rostedt@...dmis.org, derkling@...gle.com,
benbjiang@...cent.com,
Alexandre Chartre <alexandre.chartre@...cle.com>,
James.Bottomley@...senpartnership.com, OWeisse@...ch.edu,
Dhaval Giani <dhaval.giani@...cle.com>,
Junaid Shahid <junaids@...gle.com>, jsbarnes@...gle.com,
chris.hyser@...cle.com, Ben Segall <bsegall@...gle.com>,
Josh Don <joshdon@...gle.com>, Hao Luo <haoluo@...gle.com>,
Tom Lendacky <thomas.lendacky@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Tim Chen <tim.c.chen@...el.com>
Subject: Re: [PATCH -tip 12/32] sched: Simplify the core pick loop for
optimized case
On Tue, Nov 24, 2020 at 12:04:30PM -0500, Joel Fernandes wrote:
> Hi Peter,
>
> On Tue, Nov 24, 2020 at 01:04:38PM +0100, Peter Zijlstra wrote:
> > On Tue, Nov 17, 2020 at 06:19:42PM -0500, Joel Fernandes (Google) wrote:
> > > + /*
> > > + * Optimize for common case where this CPU has no cookies
> > > + * and there are no cookied tasks running on siblings.
> > > + */
> > > + if (!need_sync) {
> > > + for_each_class(class) {
> > > + next = class->pick_task(rq);
> > > + if (next)
> > > + break;
> > > + }
> > > +
> > > + if (!next->core_cookie) {
> > > + rq->core_pick = NULL;
> > > + goto done;
> > > + }
> > > need_sync = true;
> > > }
> >
> > This isn't what I sent you here:
> >
> > https://lkml.kernel.org/r/20201026093131.GF2628@hirez.programming.kicks-ass.net
>
> I had replied to it here with concerns about the effects of newly-idle
> balancing not being reversible; it was only a theoretical concern:
> http://lore.kernel.org/r/20201105185019.GA2771003@google.com
Gah, missed that. I don't think that matters much; see
put_prev_task_balance() calling balance_fair().
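
For reference, roughly that path as it looks in mainline (abridged
sketch; exact details may differ in -tip):

static void put_prev_task_balance(struct rq *rq, struct task_struct *prev,
				  struct rq_flags *rf)
{
#ifdef CONFIG_SMP
	const struct sched_class *class;

	/*
	 * Run the balance pass before put_prev_task(); it can stop as
	 * soon as a class knows it has something runnable.
	 */
	for_class_range(class, prev->sched_class, &idle_sched_class) {
		if (class->balance(rq, prev, rf))
			break;
	}
#endif

	put_prev_task(rq, prev);
}

static int
balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
	if (rq->nr_running)
		return 1;

	/* newly-idle balance happens here, before the pick */
	return newidle_balance(rq, rf) != 0;
}

IOW balance_fair() gets to run newidle_balance() before any pick sees
the runqueue.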
> > Specifically, you've lost the whole cfs-cgroup optimization.
>
> Are you referring to this optimization in pick_next_task_fair() ?
>
> /*
>  * Since we haven't yet done put_prev_entity and if the selected task
>  * is a different task than we started out with, try and touch the
>  * least amount of cfs_rqs.
>  */
Yep, that. The giant FAIR_GROUP_SCHED hunk. The thing that makes
all of pick_next_task() more complicated than it really wants to be.
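
Roughly this bit (abridged; the curr/bandwidth handling in the pick
loop is elided):

	do {
		struct sched_entity *curr = cfs_rq->curr;

		/* ... update_curr()/check_cfs_rq_runtime() handling elided ... */

		se = pick_next_entity(cfs_rq, curr);
		cfs_rq = group_cfs_rq(se);
	} while (cfs_rq);

	p = task_of(se);

	/*
	 * Since we haven't yet done put_prev_entity and if the selected task
	 * is a different task than we started out with, try and touch the
	 * least amount of cfs_rqs.
	 */
	if (prev != p) {
		struct sched_entity *pse = &prev->se;

		/* only put/set where the two hierarchies actually differ */
		while (!(cfs_rq = is_same_group(se, pse))) {
			int se_depth = se->depth;
			int pse_depth = pse->depth;

			if (se_depth <= pse_depth) {
				put_prev_entity(cfs_rq_of(pse), pse);
				pse = parent_entity(pse);
			}
			if (se_depth >= pse_depth) {
				set_next_entity(cfs_rq_of(se), se);
				se = parent_entity(se);
			}
		}

		put_prev_entity(cfs_rq, pse);
		set_next_entity(cfs_rq, se);
	}

That is, when prev and next share a cgroup subtree we only put/set the
entities on the path that changed, instead of walking the whole
hierarchy twice.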
> You are right, we wouldn't get that with just calling pick_task_fair(). We
> did not have this in the v8 series either, though.
>
> Also, if the task is a cookied task, then I think you are doing more work
> with your patch due to the extra put_prev_task().
Yes, but only if you mix cookie tasks with non-cookie tasks and schedule
two non-cookie tasks back-to-back. I don't think we care overly much
about that.
I think it makes more sense to ensure that if you have core-sched
enabled on your machine and have a (core-aligned) partition with
non-cookie tasks, scheduling works as 'normal' as possible.
> > What was wrong/not working with the below?
>
> Other than the newly-idle balancing, IIRC it was also causing instability.
> Maybe we can consider this optimization in the future if that's Ok with
> you?
Hurmph.. you don't happen to remember what went splat?