Message-ID: <20190722122828.GG25636@localhost.localdomain>
Date: Mon, 22 Jul 2019 14:28:28 +0200
From: Juri Lelli <juri.lelli@...hat.com>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: peterz@...radead.org, mingo@...hat.com, rostedt@...dmis.org,
tj@...nel.org, linux-kernel@...r.kernel.org,
luca.abeni@...tannapisa.it, claudio@...dence.eu.com,
tommaso.cucinotta@...tannapisa.it, bristot@...hat.com,
mathieu.poirier@...aro.org, lizefan@...wei.com, longman@...hat.com,
cgroups@...r.kernel.org
Subject: Re: [PATCH v9 4/8] sched/deadline: Fix bandwidth accounting at all
levels after offline migration

On 22/07/19 13:07, Dietmar Eggemann wrote:
> On 7/19/19 3:59 PM, Juri Lelli wrote:
>
> [...]
>
> > @@ -557,6 +558,38 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p
> > double_lock_balance(rq, later_rq);
> > }
> >
> > + if (p->dl.dl_non_contending || p->dl.dl_throttled) {
> > + /*
> > + * Inactive timer is armed (or callback is running, but
> > + * waiting for us to release rq locks). In any case, when it
> > + * will file (or continue), it will see running_bw of this
>
> s/file/fire ?

Yep.
> > + * task migrated to later_rq (and correctly handle it).
>
> Is this because of dl_task_timer()->enqueue_task_dl()->task_contending()
> setting dl_se->dl_non_contending = 0 ?

No, this is related to the inactive_task_timer() callback. Since the task
is being migrated (by this function calling set_task_cpu()) because a CPU
hotplug operation happened, we need to reflect the migration in the
running_bw accounting as well, or inactive_task_timer() might subtract the
task's bandwidth from the new CPU's rq, where it was never added, and
cause running_bw to underflow.
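
In other words, the rest of that hunk (not quoted above) ends up doing
something along these lines; add_running_bw()/sub_running_bw() and
add_rq_bw()/sub_rq_bw() are the existing accounting helpers in
kernel/sched/deadline.c, but take the exact body below as a sketch rather
than a verbatim quote of the patch:

        if (p->dl.dl_non_contending || p->dl.dl_throttled) {
                /*
                 * Move the task's contribution to running_bw (and rq_bw)
                 * over to later_rq, so that when inactive_task_timer()
                 * eventually runs it subtracts from the rq the task now
                 * belongs to, rather than underflowing a CPU where that
                 * bandwidth was never added.
                 */
                sub_running_bw(&p->dl, &rq->dl);
                sub_rq_bw(&p->dl, &rq->dl);
                add_rq_bw(&p->dl, &later_rq->dl);
                add_running_bw(&p->dl, &later_rq->dl);
        } else {
                /*
                 * Task is neither throttled nor non-contending: only the
                 * per-rq bandwidth needs to follow it to later_rq.
                 */
                sub_rq_bw(&p->dl, &rq->dl);
                add_rq_bw(&p->dl, &later_rq->dl);
        }
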
Thanks,
Juri