Message-ID: <20260210185212.GJ3016024@noisy.programming.kicks-ass.net>
Date: Tue, 10 Feb 2026 19:52:12 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Doug Smythies <dsmythies@...us.net>
Cc: 'K Prateek Nayak' <kprateek.nayak@....com>, mingo@...nel.org,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, vschneid@...hat.com, linux-kernel@...r.kernel.org,
wangtao554@...wei.com, quzicheng@...wei.com,
wuyun.abel@...edance.com
Subject: Re: [PATCH 0/4] sched: Various reweight_entity() fixes
On Tue, Feb 10, 2026 at 07:41:58AM -0800, Doug Smythies wrote:
> On 2026.02.09.07:47 Peter Zijlstra wrote:
> > On Wed, Feb 04, 2026 at 03:45:58PM +0530, K Prateek Nayak wrote:
> >
> >> # Overflow on enqueue
> >>
> >> <...>-102371 [255] ... : __enqueue_entity: Overflowed cfs_rq:
> >> <...>-102371 [255] ... : dump_h_overflow_cfs_rq: cfs_rq: depth(0) weight(90894772) nr_queued(2) sum_w_vruntime(0) sum_weight(0) zero_vruntime(701164930256050) sum_shift(0) avg_vruntime(701809615900788)
> >> <...>-102371 [255] ... : dump_h_overflow_entity: se: weight(3508) vruntime(701809615900788) slice(2800000) deadline(701810568648095) curr?(1) task?(1) <-------- cfs_rq->curr
> >> <...>-102371 [255] ... : __enqueue_entity: Overflowed se:
> >> <...>-102371 [255] ... : dump_h_overflow_entity: se: weight(90891264) vruntime(701808975077099) slice(2800000) deadline(701808975109401) curr?(0) task?(0) <-------- new se
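
As a rough sanity check on the numbers in that trace: the new se's vruntime sits about 6.4e11 ns past zero_vruntime, and multiplying an offset of that size by a weight on the order of 9e7 already exceeds S64_MAX, so any signed 64-bit weighted-vruntime accumulation keyed off zero_vruntime would wrap. A minimal userspace sketch with the values taken from the trace above (which exact kernel expression overflows is an assumption here, and __int128 is a GCC/Clang extension):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* values copied from the dump_h_overflow_* lines above */
		int64_t zero_vruntime = 701164930256050LL;
		int64_t se_vruntime   = 701808975077099LL;	/* new se */
		int64_t se_weight     = 90891264LL;		/* new se */

		int64_t delta = se_vruntime - zero_vruntime;	/* ~6.44e11 ns */
		__int128 key  = (__int128)delta * se_weight;	/* ~5.9e19 */

		printf("delta          = %lld ns\n", (long long)delta);
		printf("delta * weight = %.3e\n", (double)key);
		printf("S64_MAX        = %.3e\n", (double)INT64_MAX);
		printf("fits in s64?   = %s\n",
		       key <= (__int128)INT64_MAX ? "yes" : "no");
		return 0;
	}
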
> >
> > So I spent quite a while trying to reproduce the splat, but alas.
> >
> > That said, I did spot something 'funny' in the above: note that
> > zero_vruntime and avg_vruntime/curr->vruntime are significantly apart
> > (roughly 6.4e11 ns, i.e. ~645 seconds of vruntime). That is not
> > something that should happen; zero_vruntime is supposed to closely
> > track avg_vruntime.
> >
> > That led me to hypothesise that there is a problem tracking
> > zero_vruntime when there is but a single runnable task, and sure
> > enough, I could reproduce that, albeit not at such a scale as to lead to
> > such problems (probably too much noise on my machine).
> >
> > I ended up with the below; and I've already pushed out a fresh
> > queue/sched/core. Could you please test again?
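
(The patch and the refreshed queue/sched/core branch referred to above are not reproduced in this excerpt.) Purely as an illustration of the single-task hypothesis, and assuming zero_vruntime is only pulled toward avg_vruntime when entities are enqueued or dequeued, a cfs_rq that runs one task for a long stretch would let the two drift apart without bound, on the order of what the trace shows:

	#include <stdio.h>
	#include <stdint.h>

	/*
	 * Toy model only -- not kernel code and not the patch referred to
	 * above.  Assumption: with a single runnable task nothing pulls
	 * zero_vruntime toward avg_vruntime, so the gap grows with runtime.
	 */
	int main(void)
	{
		int64_t avg_vruntime  = 0;	/* follows the lone task's vruntime */
		int64_t zero_vruntime = 0;	/* assumed never updated here */
		int64_t step_ns       = 4000000;	/* ~4ms of vruntime per period */

		for (int64_t t = 0; t < 160000; t++)	/* ~640s of single-task running */
			avg_vruntime += step_ns;

		printf("gap = %lld ns (~%lld s of vruntime), cf. the trace above\n",
		       (long long)(avg_vruntime - zero_vruntime),
		       (long long)((avg_vruntime - zero_vruntime) / 1000000000LL));
		return 0;
	}
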
>
> I tested this "V2". The CPU migration times test results are not good.
> We expect the sample time to not deviate from the nominal 1 second
> by more than 10 milliseconds for this test. The test ran for about
> 13 hours and 41 minutes (49,243 samples). Histogram of times:
>
> It seems something has regressed over the last year.
> Our threshold of 10 milliseconds was rather arbitrary.
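
Doug's actual test program is not included in this excerpt; a hedged sketch of the kind of measurement described above (nominally 1-second samples, flagging any interval that deviates by more than 10 ms) might look like this:

	#include <stdio.h>
	#include <time.h>

	/*
	 * Sketch only -- not the actual CPU migration times test.  Take
	 * nominally 1-second samples and report how far each interval
	 * deviates from 1 second; >10ms is treated as a failure above.
	 */
	int main(void)
	{
		struct timespec prev, now, one_sec = { .tv_sec = 1, .tv_nsec = 0 };

		clock_gettime(CLOCK_MONOTONIC, &prev);
		for (int i = 0; i < 60; i++) {
			nanosleep(&one_sec, NULL);
			clock_gettime(CLOCK_MONOTONIC, &now);
			long long delta_ns =
				(now.tv_sec - prev.tv_sec) * 1000000000LL +
				(now.tv_nsec - prev.tv_nsec);
			long long dev_us = (delta_ns - 1000000000LL) / 1000;
			printf("sample %2d: %+lld us%s\n", i, dev_us,
			       (dev_us > 10000 || dev_us < -10000) ? "  <-- over 10ms" : "");
			prev = now;
		}
		return 0;
	}
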
Moo.. I'll go dig out that benchmark too.