Message-ID: <ZB1NU1Yc8DSi4zrW@chenyu5-mobl1>
Date: Fri, 24 Mar 2023 15:12:19 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: <mingo@...nel.org>, <vincent.guittot@...aro.org>,
<linux-kernel@...r.kernel.org>, <juri.lelli@...hat.com>,
<dietmar.eggemann@....com>, <rostedt@...dmis.org>,
<bsegall@...gle.com>, <mgorman@...e.de>, <bristot@...hat.com>,
<corbet@....net>, <qyousef@...alina.io>, <chris.hyser@...cle.com>,
<patrick.bellasi@...bug.net>, <pjt@...gle.com>, <pavel@....cz>,
<qperret@...gle.com>, <tim.c.chen@...ux.intel.com>,
<joshdon@...gle.com>, <timj@....org>, <kprateek.nayak@....com>,
<youssefesmat@...omium.org>, <joel@...lfernandes.org>
Subject: Re: [PATCH 06/10] sched/fair: Add avg_vruntime
On 2023-03-21 at 17:04:58 +0100, Peter Zijlstra wrote:
> On Tue, Mar 21, 2023 at 09:58:13PM +0800, Chen Yu wrote:
> > On 2023-03-06 at 14:25:27 +0100, Peter Zijlstra wrote:
> > [...]
> > >
> > > +/*
> > > + * Compute virtual time from the per-task service numbers:
> > > + *
> > > + * Fair schedulers conserve lag: \Sum lag_i = 0
> > > + *
> > > + * lag_i = S - s_i = w_i * (V - v_i)
> > > + *
> > The definition of lag_i above seems to be inconsistent with the definition
> > of se->lag in PATCH 8. Maybe rename lag_i to something else to avoid confusion?
>
> Yeah, I ran into that the other day, I think I'll introduce vlag_i = V - v_i
> or so.
>
> > > + * \Sum lag_i = 0 -> \Sum w_i * (V - v_i) = V * \Sum w_i - \Sum w_i * v_i = 0
> > > + *
> > > + * From which we solve V:
> > > + *
> > > + * \Sum v_i * w_i
> > > + * V = --------------
> > > + * \Sum w_i
> > > + *
> > > + * However, since v_i is u64, and the multiplication could easily overflow,
> > > + * transform it into a relative form that uses smaller quantities:
> > > + *
> > > + * Substitute: v_i == (v_i - v) + v
> > > + *
> > > + * \Sum ((v_i - v) + v) * w_i \Sum (v_i - v) * w_i
> > > + * V = -------------------------- = -------------------- + v
> > > + * \Sum w_i \Sum w_i
> > > + *
> > > + *
>
> > Not sure if I understand it correctly, but does it mean (v_i - v) * w_i will
> > not overflow? If the weight of a task is 15 (nice 19), then the product
> > overflows once v_i - v > (S64_MAX / 15). Is it possible for v_i to be much
> > larger than cfs_rq->min_vruntime in this case?
>
> Or worse, SCHED_IDLE, where weight is 2 (IIRC) or cgroups, then vtime
> advances at 512 times realtime. Now, the tick puts a limit on how long
> we'll overshoot these super low weight entities, for HZ=1000 we still
> only get 0.5s of vtime for weight=2.
>
> That would be only 30 bits used, except we use double FIXEDPOINT_SHIFT
> on 64bit, so we'll end up at 40-ish.
>
> That should give us enough room to carry an average of deltas around
> min_vruntime.
>
I'm trying to digest how ticks could prevent the overflow.
In update_curr() -> update_min_vruntime(cfs_rq), the cfs_rq->min_vruntime
is set to
max (cfs_rq->min_vruntime, min(curr->vruntime, leftmost(se->vruntime)))
so, although curr->vruntime increases by 0.5 seconds of vtime at each tick,
the leftmost se->vruntime could still be very small and unchanged,
and thus the delta between v_i and cfs_rq->min_vruntime can still grow large.
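To make that concrete, here is a toy model of the update rule above (just the
max/min logic with made-up values; the real code also handles curr->on_rq and
vruntime wraparound):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

/*
 * Toy model: min_vruntime only moves forward, towards
 * min(curr->vruntime, leftmost(se->vruntime)). If the leftmost
 * entity's vruntime stays small and unchanged, min_vruntime is pinned
 * there no matter how far curr->vruntime runs ahead, so the delta
 * curr->vruntime - min_vruntime keeps growing.
 */
static u64 update_min_vruntime_model(u64 min_vruntime, u64 curr_vruntime,
				     u64 leftmost_vruntime)
{
	u64 vruntime = curr_vruntime < leftmost_vruntime ?
		       curr_vruntime : leftmost_vruntime;

	return vruntime > min_vruntime ? vruntime : min_vruntime;
}

int main(void)
{
	u64 min_vruntime = 0, leftmost = 1000, curr = 1000;
	int tick;

	/* curr advances ~0.5s of vtime per 1ms tick (weight == 2) */
	for (tick = 0; tick < 1000; tick++) {
		curr += 512000000ULL;
		min_vruntime = update_min_vruntime_model(min_vruntime,
							 curr, leftmost);
	}

	/* the spread keeps growing even though min_vruntime is updated every tick */
	printf("curr - min_vruntime = %llu ns of vtime\n",
	       (unsigned long long)(curr - min_vruntime));
	return 0;
}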
Instead, sysctl_sched_latency could be what decides how far se.vruntime gets
from cfs_rq.min_vruntime, which can be estimated from the vruntime delta
granted to task1 and task2 per slice:
sched_vslice(task1) = (NICE0_LOAD/se1.weight) * (w1/Sum wi * sysctl_sched_latency)
sched_vslice(task2) = (NICE0_LOAD/se2.weight) * (w2/Sum wi * sysctl_sched_latency)
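For illustration only, assuming NICE0_LOAD = 1024, a nice-19 task (weight 15)
and a nice-0 task (weight 1024) as the only runnable tasks, and
sysctl_sched_latency = 24ms:
sched_vslice(task1) = (1024/15)   * (15/1039   * 24ms) ~= 23.65ms
sched_vslice(task2) = (1024/1024) * (1024/1039 * 24ms) ~= 23.65ms
Both reduce to (NICE0_LOAD/Sum wi) * sysctl_sched_latency, so the vruntime
granted per slice is bounded by sysctl_sched_latency rather than by the tick.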
Besides, in patch 10, entity_eligible() checks
\Sum (v_i - v) * w_i >= (v_i - v) * (\Sum w_i)
and \Sum w_i could become large if there are many runnable tasks, which might
bring overflow? But yes, if v_i - v is kept small we can get rid of this
issue IMO.
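Roughly, that check can be sketched as below (made-up parameter names, just to
show where the multiplication by \Sum w_i sits; not the patch's actual code):

typedef long long s64;

/*
 * Eligible means v_i <= V, i.e. V - v_i >= 0. With the relative sums
 * this can be tested without a division:
 *
 *	\Sum (v_j - v) * w_j >= (v_i - v) * \Sum w_j
 *
 * Both sides can be large s64 values, which is where the concern about
 * a big \Sum w_i (or a big v_i - v) comes in.
 */
int entity_eligible_sketch(s64 sum_rel_vruntime_weight, s64 sum_weight,
			   s64 rel_vruntime)
{
	return sum_rel_vruntime_weight >= rel_vruntime * sum_weight;
}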
thanks,
Chenyu
> But yes, I've seen this go sideways and I need to stare a bit more at
> this. One of the things I've considered is changing the min_vruntime
> update rules to instead move to avg_vruntime() to minimize the deltas.
> But I've not yet actually written that code.
>