Message-ID: <20150727032959.GB3328@fixme-laptop.cn.ibm.com>
Date: Mon, 27 Jul 2015 11:29:59 +0800
From: Boqun Feng <boqun.feng@...il.com>
To: Yuyang Du <yuyang.du@...el.com>
Cc: mingo@...nel.org, peterz@...radead.org,
linux-kernel@...r.kernel.org, pjt@...gle.com, bsegall@...gle.com,
morten.rasmussen@....com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, umgwanakikbuti@...il.com,
len.brown@...el.com, rafael.j.wysocki@...el.com,
arjan@...ux.intel.com, fengguang.wu@...el.com
Subject: Re: [PATCH v10 6/7] sched: Provide runnable_load_avg back to cfs_rq

On Mon, Jul 27, 2015 at 11:21:15AM +0800, Boqun Feng wrote:
> Hi Yuyang,
>
> On Mon, Jul 27, 2015 at 02:43:25AM +0800, Yuyang Du wrote:
> > Hi Boqun,
> >
> > On Tue, Jul 21, 2015 at 06:29:56PM +0800, Boqun Feng wrote:
> > > The point is that you have already tracked the sum of runnable_load_avg
> > > and blocked_load_avg in cfs_rq->avg.load_avg. If you're going to track
> > > part of the sum, you'd better track the one that's updated less
> > > frequently, right?
> > >
> > > Anyway, this idea just came to my mind. I wonder which one is updated
> > > less frequently myself, too. ;-) So I'm asking to see whether there is
> > > something we can improve.
> >
> > Actually, this is not the point.
> >
> > 1) blocked load is more "difficult" to track; hint: migration.
> >
> > 2) r(t1) - b(t2) is not anything meaningful; hint: t1 != t2
>
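> To make sure I get point 2): since both sums decay over time, the
> difference
>
> 	r(t1) - b(t2)
>
> only means something once both are decayed to the same instant, i.e.
>
> 	runnable(now) = load_avg(now) - blocked(now)
>
> which is roughly what I'm trying to do below: decay the blocked sum up
> to "now" right before subtracting it from cfs_rq->avg.load_avg.
>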
> Please consider the patch below, which is not tested yet and is just for
> discussion. It is based on patches 1-5 of your patchset and is intended
> to replace patch 6. Hope this makes my point clearer.
>
> Thanks anyway for being patient with me ;-)
>
> Regards,
> Boqun
>
> ========================================================================
>
> Subject: [PATCH] sched: lazy blocked load tracking
>
> With this patch, cfs_rq_runnable_load_avg can be implemented as follows:
>
> static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq)
> {
> 	u64 now = cfs_rq_clock_task(cfs_rq);
> 	decay_cfs_rq_blocked_load(now, cfs_rq);
>
> 	return max_t(long, cfs_rq->avg.load_avg - cfs_rq->blocked_load_avg, 0);
> }
>
> ---
> kernel/sched/fair.c | 41 +++++++++++++++++++++++++++++++++++++++++
> kernel/sched/sched.h | 4 ++++
> 2 files changed, 45 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e977074..76beb81 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2625,6 +2625,20 @@ static __always_inline int __update_load_avg(u64 now, int cpu,
>  	return decayed;
>  }
>
> +static inline u64 decay_cfs_rq_blocked_load(u64 now, struct cfs_rq *cfs_rq)
> +{
> +	u64 decays;
> +
> +	now = now >> 20;
> +	decays = now - cfs_rq->last_blocked_load_decays;
> +
> +	cfs_rq->blocked_load_sum = decay_load(cfs_rq->blocked_load_sum, decays);
> +	cfs_rq->blocked_load_avg = div_u64(cfs_rq->blocked_load_sum, LOAD_AVG_MAX);
> +	cfs_rq->last_blocked_load_update_time = now;

Sorry for the typo, should be last_blocked_load_decays here ;-)
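
Just to spell it out, with that typo fixed the helper would look roughly
like below (still completely untested, and the "return decays" at the end
is my guess, since the rest of the hunk is trimmed above):

static inline u64 decay_cfs_rq_blocked_load(u64 now, struct cfs_rq *cfs_rq)
{
	u64 decays;

	/* Convert ns to ~1ms (1024us) decay periods, as elsewhere in PELT. */
	now = now >> 20;
	decays = now - cfs_rq->last_blocked_load_decays;

	/* Catch the blocked sum/avg up with the periods we missed. */
	cfs_rq->blocked_load_sum = decay_load(cfs_rq->blocked_load_sum, decays);
	cfs_rq->blocked_load_avg = div_u64(cfs_rq->blocked_load_sum, LOAD_AVG_MAX);
	cfs_rq->last_blocked_load_decays = now;

	return decays;
}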
Regards,
Boqun