Message-ID: <20241119113016.GB66918@pauld.westford.csb>
Date: Tue, 19 Nov 2024 06:30:16 -0500
From: Phil Auld <pauld@...hat.com>
To: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
kprateek.nayak@....com, wuyun.abel@...edance.com,
youssefesmat@...omium.org, tglx@...utronix.de
Subject: Re: [PATCH] sched/fair: Dequeue sched_delayed tasks when waking to a
busy CPU
On Thu, Nov 14, 2024 at 06:28:54AM -0500 Phil Auld wrote:
> On Thu, Nov 14, 2024 at 12:07:03PM +0100 Mike Galbraith wrote:
> > On Tue, 2024-11-12 at 17:15 +0100, Mike Galbraith wrote:
> > > On Tue, 2024-11-12 at 10:41 -0500, Phil Auld wrote:
> > > > On Tue, Nov 12, 2024 at 03:23:38PM +0100 Mike Galbraith wrote:
> > > >
> > > > >
> > > > > We don't have to let sched_delayed block SIS, though. Rendering
> > > > > them transparent in idle_cpu() did NOT wreck the progression, so
> > > > > maaaybe it could help your regression.
> > > > >
> > > >
> > > > You mean something like:
> > > >
> > > > if (rq->nr_running > rq->h_nr_delayed)
> > > > 	return 0;
> > > >
> > > > in idle_cpu() instead of the straight rq->nr_running check?
> > >
> > > Yeah, close enough.
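
(For reference, a minimal sketch of what that check might look like inside
idle_cpu(), assuming an rq-level h_nr_delayed count of delayed-dequeue tasks
as discussed above; names and placement are illustrative, not the exact
posted code:)

	int idle_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		if (rq->curr != rq->idle)
			return 0;

		/*
		 * Treat sched_delayed tasks as transparent: if everything
		 * still queued here is merely delayed, keep reporting the
		 * CPU as idle so select_idle_sibling() can still pick it.
		 */
		if (rq->nr_running > rq->h_nr_delayed)
			return 0;

		if (rq->ttwu_pending)
			return 0;

		return 1;
	}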
> >
> > The below is all you need.
> >
> > Watching the blockage rate during part of a netperf scaling run: without
> > the change, a bit over 2/sec was the highest it got; with it, that drops
> > to the same zero as turning the feature off, so... relevance highly
> > unlikely but not quite impossible?
> >
>
> I'll give this a try on my issue. This'll be simpler than the other way.
>
This, below, by itself, did not help and caused a small slowdown on some
other tests. Did this need to be on top of the wakeup change?
Cheers,
Phil
> Thanks!
>
>
>
> Cheers,
> Phil
>
>
> > ---
> > kernel/sched/fair.c | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9454,11 +9454,15 @@ int can_migrate_task(struct task_struct
> >
> > /*
> > * We do not migrate tasks that are:
> > + * 0) not runnable (not useful here/now, but are annoying), or
> > * 1) throttled_lb_pair, or
> > * 2) cannot be migrated to this CPU due to cpus_ptr, or
> > * 3) running (obviously), or
> > * 4) are cache-hot on their current CPU.
> > */
> > + if (p->se.sched_delayed)
> > + return 0;
> > +
> > if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
> > return 0;
> >
> >
>
> --
>
>