Message-ID: <ZhPtCyRmPxa0DpMe@chenyu5-mobl2>
Date: Mon, 8 Apr 2024 21:11:39 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Abel Wu <wuyun.abel@...edance.com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Juri Lelli
<juri.lelli@...hat.com>, Tim Chen <tim.c.chen@...el.com>, Tiwei Bie
<tiwei.btw@...group.com>, Honglei Wang <wanghonglei@...ichuxing.com>, "Aaron
Lu" <aaron.lu@...el.com>, Chen Yu <yu.chen.surf@...il.com>,
<linux-kernel@...r.kernel.org>, kernel test robot <oliver.sang@...el.com>
Subject: Re: [RFC PATCH] sched/eevdf: Return leftmost entity in pick_eevdf()
if no eligible entity is found
On 2024-04-08 at 13:58:33 +0200, Peter Zijlstra wrote:
> On Thu, Feb 29, 2024 at 05:00:18PM +0800, Abel Wu wrote:
>
> > > According to the log, vruntime is 18435852013561943404, the
> > > cfs_rq->min_vruntime is 763383370431, the load is 629 + 2048 = 2677,
> > > thus:
> > > s64 delta = (s64)(18435852013561943404 - 763383370431) = -10892823530978643
> > > delta * 2677 = 7733399554989275921
> > > that is to say, the multiplication result overflows s64, which turns the
> > > negative value into a positive one, so the eligibility check fails.
> >
> > Indeed.
>
> From the data presented it looks like min_vruntime is wrong and needs
> update. If you can readily reproduce this, dump the vruntime of all
> tasks on the runqueue and see if min_vruntime is indeed correct.
>
This is the dump of all the entities in the tree when the issue happened,
first from left to right, then top-down in in-order traversal:
[ 514.461242][ T8390] cfs_rq avg_vruntime:386638640128 avg_load:2048 cfs_rq->min_vruntime:763383370431
[ 514.535935][ T8390] current on_rq se 0xc5851400, deadline:18435852013562231446 min_vruntime:18437121115753667698 vruntime:18435852013561943404, load:629
[ 514.536772][ T8390] Traverse rb-tree from left to right
[ 514.537138][ T8390] se 0xec1234e0 deadline:763384870431 min_vruntime:763383370431 vruntime:763383370431 non-eligible <-- leftmost se
[ 514.537835][ T8390] se 0xec4fcf20 deadline:763762447228 min_vruntime:763760947228 vruntime:763760947228 non-eligible
[ 514.538539][ T8390] Traverse rb-tree from topdown
[ 514.538877][ T8390] middle se 0xec1234e0 deadline:763384870431 min_vruntime:763383370431 vruntime:763383370431 non-eligible <-- root se
[ 514.539605][ T8390] middle se 0xec4fcf20 deadline:763762447228 min_vruntime:763760947228 vruntime:763760947228 non-eligible
The tree looks like:

    se (0xec1234e0)
           \
            \
             se (0xec4fcf20)
The root se 0xec1234e0 is also the leftmost se; its min_vruntime and
vruntime are both 763383370431, which matches cfs_rq->min_vruntime. It
seems that cfs_rq->min_vruntime gets updated correctly, because it
remains monotonically increasing.
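
For reference, here is a sketch of the kind of debug helper that can
produce such a dump. It is hypothetical (not in mainline), would have to
live in kernel/sched/fair.c to reach the static helpers, and assumes the
current layout where the tree is keyed by deadline and augmented with
se->min_vruntime:

/* hypothetical debug helper: dump the cfs_rq and every entity in the
 * timeline from left to right, i.e. in-order traversal
 */
static void dump_cfs_rq(struct cfs_rq *cfs_rq)
{
	struct rb_node *node;

	pr_info("cfs_rq avg_vruntime:%lld avg_load:%llu cfs_rq->min_vruntime:%llu\n",
		cfs_rq->avg_vruntime, cfs_rq->avg_load, cfs_rq->min_vruntime);

	for (node = rb_first_cached(&cfs_rq->tasks_timeline); node;
	     node = rb_next(node)) {
		struct sched_entity *se = __node_2_se(node);

		pr_info("se 0x%px deadline:%llu min_vruntime:%llu vruntime:%llu %s\n",
			se, se->deadline, se->min_vruntime, se->vruntime,
			entity_eligible(cfs_rq, se) ? "eligible" : "non-eligible");
	}
}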
My guess is that, for some reason, a newly forked se in a newly created
task group has not been picked from the rb-tree for a long time (maybe
because it is not eligible). Its vruntime stayed near the initial value
(unsigned long)(-(1LL << 20)) for a long time, far behind
cfs_rq->min_vruntime, and thus the overflow happens.
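
To make the overflow concrete, here is a minimal userspace sketch that
replays the arithmetic from the dump above; the variable names are just
for illustration, mirroring the (s64)(vruntime - cfs_rq->min_vruntime) *
load step of the eligibility check:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* values taken from the dump above */
	uint64_t vruntime     = 18435852013561943404ULL;
	uint64_t min_vruntime = 763383370431ULL;
	int64_t  load         = 629 + 2048;	/* curr weight + avg_load */

	int64_t delta = (int64_t)(vruntime - min_vruntime);

	/* the kernel is built with -fno-strict-overflow, so the signed
	 * multiplication wraps; emulate the wrap with unsigned math
	 */
	int64_t key = (int64_t)((uint64_t)delta * (uint64_t)load);

	printf("delta      = %lld\n", (long long)delta);  /* -10892823530978643 */
	printf("delta*load = %lld\n", (long long)key);    /* 7733399554989275921 */
	return 0;
}

The product should be a large negative number, but it wraps past S64_MAX
and comes out positive, so the entity is wrongly treated as non-eligible.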
thanks,
Chenyu
> > > So where is this insane huge vruntime 18435852013561943404 coming from?
> > > My guess is that it is because the initial value of cfs_rq->min_vruntime
> > > is set to (unsigned long)(-(1LL << 20)). If the task (watchdog in this
> > > case) is seldom scheduled in, its vruntime might not move forward much
> > > and remains at the original value set by a previous place_entity().
> >
> > So why not just initialize to 0? The (unsigned long)(-(1LL << 20))
> > thing is dangerous as it can easily blow up lots of calculations in
> > lag, key, avg_vruntime and so on.
>
> The reason is to ensure the wrap-around logic works -- which it must,
> because with the weighting thing, the vruntime can wrap quite quickly,
> something like one day IIRC (20 bit for precision etc.)
>
> Better to have the wrap around happen quickly after boot and have
> everybody suffer, rather than have it be special and hard to reproduce.
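
As a side note, the wrap-around comparisons mentioned above rely on the
usual (s64)(a - b) sign trick, which stays correct across a u64 wrap as
long as the two values are within 2^63 of each other. A minimal userspace
sketch, with a hypothetical helper name:

#include <stdio.h>
#include <stdint.h>

/* hypothetical helper mirroring the scheduler's wrap-safe ordering */
static int vruntime_before(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}

int main(void)
{
	uint64_t start   = (uint64_t)(-(1LL << 20));	/* initial min_vruntime */
	uint64_t wrapped = start + (2ULL << 20);	/* numerically smaller, logically later */

	printf("%d\n", vruntime_before(start, wrapped));	/* prints 1 */
	printf("%d\n", vruntime_before(wrapped, start));	/* prints 0 */
	return 0;
}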