Message-ID: <20241121212145.GG394828@pauld.westford.csb>
Date: Thu, 21 Nov 2024 16:21:45 -0500
From: Phil Auld <pauld@...hat.com>
To: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
kprateek.nayak@....com, wuyun.abel@...edance.com,
youssefesmat@...omium.org, tglx@...utronix.de
Subject: Re: [PATCH] sched/fair: Dequeue sched_delayed tasks when waking to a
busy CPU

On Thu, Nov 21, 2024 at 07:07:04AM -0500 Phil Auld wrote:
> On Thu, Nov 21, 2024 at 06:56:28AM -0500 Phil Auld wrote:
> > On Wed, Nov 20, 2024 at 07:37:39PM +0100 Mike Galbraith wrote:
> > > On Tue, 2024-11-19 at 12:51 +0100, Mike Galbraith wrote:
> > > > On Tue, 2024-11-19 at 06:30 -0500, Phil Auld wrote:
> > > > >
> > > > > This, below, by itself, did not help and caused a small slowdown on some
> > > > > other tests. Did this need to be on top of the wakeup change?
> > > >
> > > > No, that made a mess.
> > >
> > > Rashly speculating that turning the mobile kthread component loose is
> > > what helped your write regression...
> > >
> > > You could try adding a (p->flags & PF_KTHREAD) check to the wakeup patch
> > > so that only hard-working kthreads are turned loose to dodge service
> > > latency. It seems unlikely that wakeup frequency * instances would
> > > combine to shred fio the way turning tbench loose did.
> > >
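
For reference, here's roughly how I read that suggestion -- an untested
sketch only, with the ttwu_runnable() context paraphrased from memory of
the wakeup patch rather than copied from it, so the exact flags and
placement below are assumptions:

	/*
	 * Sketch: restrict the busy-CPU dequeue of sched_delayed tasks
	 * to kthreads.  Everything else keeps plain delayed dequeue.
	 */
	static int ttwu_runnable(struct task_struct *p, int wake_flags)
	{
		struct rq_flags rf;
		struct rq *rq;
		int ret = 0;

		rq = __task_rq_lock(p, &rf);
		if (task_on_rq_queued(p)) {
			update_rq_clock(rq);
			if (p->se.sched_delayed) {
				/*
				 * Only turn hard-working kthreads loose:
				 * dequeue them when waking to a busy CPU so
				 * the rest of ttwu() can pick a better one.
				 */
				if ((p->flags & PF_KTHREAD) &&
				    rq->nr_running > 1 && p->nr_cpus_allowed > 1) {
					dequeue_task(rq, p, DEQUEUE_SLEEP |
						     DEQUEUE_DELAYED | DEQUEUE_NOCLOCK);
					goto out_unlock;
				}
				enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
			}
			if (!task_on_cpu(rq, p))
				wakeup_preempt(rq, p, wake_flags);
			ttwu_do_wakeup(p);
			ret = 1;
		}
	out_unlock:
		__task_rq_unlock(rq, &rf);

		return ret;
	}

The idea being that only kthreads take the dequeue-and-migrate path on a
busy CPU, while ordinary tasks keep the stock delayed-dequeue behaviour.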
> >
> > Thanks, I'll try that.
> >
>
> Also, fwiw, I think there is another report here
>
> https://lore.kernel.org/lkml/392209D9-5AC6-4FDE-8D84-FB8A82AD9AEF@oracle.com/
>
> which seems to be the same thing but mis-bisected. I've asked them to try
> with NO_DELAY_DEQUEUE just to be sure. But it looks like a duck.
>
But it does not quack like one. Their issue did not go away with
NO_DELAY_DEQUEUE, so something different is causing that one.
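
For anyone wanting to reproduce, the knob can be flipped at runtime via
the sched features file (assuming debugfs is mounted at /sys/kernel/debug):

	echo NO_DELAY_DEQUEUE > /sys/kernel/debug/sched/features

Writing DELAY_DEQUEUE switches it back to the default.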
>
> Cheers,
> Phil
>
> --
>
>
--