Message-ID: <20160127143651.4de18ad9@luca-1225C>
Date: Wed, 27 Jan 2016 14:36:51 +0100
From: Luca Abeni <luca.abeni@...tn.it>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@....com>
Subject: Re: [RFC 4/8] Improve the tracking of active utilisation
Hi Peter,
On Tue, 19 Jan 2016 14:47:39 +0100
Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, Jan 19, 2016 at 01:20:13PM +0100, Luca Abeni wrote:
> > Hi Peter,
> >
> > On 01/14/2016 08:43 PM, Peter Zijlstra wrote:
> > >On Thu, Jan 14, 2016 at 04:24:49PM +0100, Luca Abeni wrote:
> > >>This patch implements a more theoretically sound algorithm for
> > >>tracking the active utilisation: instead of decreasing it when a
> > >>task blocks, use a timer (the "inactive timer", named after the
> > >>"Inactive" task state of the GRUB algorithm) to decrease the
> > >>active utilisation at the so-called "0-lag time".
> > >
> > >See also the large-ish comment in __setparam_dl().
> > >
> > >If we go do proper 0-lag, as GRUB requires, then we might as well
> > >use it for that.
> > Just to check if I understand correctly:
> > I would need to remove "dl_b->total_bw -= p->dl.dl_bw;" from
> > task_dead_dl(), and __dl_clear() from "else if (!dl_policy(policy)
> > && task_has_dl_policy(p))" in dl_overflow(). Then, arm the
> > inactive_timer in these cases, and add the __dl_clear() call in the
> > "if (!dl_task(p))" branch of inactive_task_timer()... Right?
>
> Correct.
>
> > If this understanding is correct (modulo some details that I'll
> > figure out during testing), I'll try this.
>
> Yes, there's bound to be 'fun' details..
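In other words, task_dead_dl() and the !dl_policy() branch of
dl_overflow() only arm the inactive timer, and the bandwidth is
released when the timer fires at the 0-lag time. A rough sketch of the
relevant branch of the handler (not the actual patch: locking/RCU, the
timer arming and the rest of the handler are omitted, and the field
and helper names just follow kernel/sched/deadline.c and this series):

static enum hrtimer_restart inactive_task_timer(struct hrtimer *timer)
{
	struct sched_dl_entity *dl_se = container_of(timer,
						     struct sched_dl_entity,
						     inactive_timer);
	struct task_struct *p = dl_task_of(dl_se);

	if (!dl_task(p) || p->state == TASK_DEAD) {
		/*
		 * The task died or left SCHED_DEADLINE while "inactive":
		 * now that the 0-lag time has passed, its bandwidth can
		 * really be released, instead of doing it synchronously
		 * in task_dead_dl() / dl_overflow() as before.
		 */
		struct dl_bw *dl_b = dl_bw_of(task_cpu(p));

		raw_spin_lock(&dl_b->lock);
		__dl_clear(dl_b, p->dl.dl_bw);
		raw_spin_unlock(&dl_b->lock);
	}

	/* ... decrease the active utilisation as in the rest of the patch ... */

	return HRTIMER_NORESTART;
}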
Ok, so I implemented this idea, and I am currently testing it...
The first experiments seem to show no problems, but so far I have only
tried some simple workloads (rt-app, or other periodic task sets
scheduled by SCHED_DEADLINE). Do you have suggestions for more
"interesting" (and meaningful) tests/experiments?
Thanks,
Luca