Message-ID: <3a623696-a0f9-2246-420d-aee88d6acd75@redhat.com>
Date: Mon, 29 Aug 2022 10:33:50 +0200
From: Daniel Bristot de Oliveira <bristot@...hat.com>
To: Shang XiaoJing <shangxiaojing@...wei.com>, mingo@...hat.com,
peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next] sched/deadline: Save processing meaningless ops in dl_task_offline_migration

On 8/27/22 04:04, Shang XiaoJing wrote:
> The task's bandwidth will be subtracted from the old root domain and
> added to the new one even when find_lock_later_rq() returns a later_rq
> that belongs to the same root domain as the old rq. Skip these
> operations when the root domain is unchanged.
This subject is not good. Please change it to a "meaningful" one, describing the
change, not its consequence.
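
Also, note that the guard has to compare the root_domain pointers
themselves: taking the address of the rd member ("&rq->rd !=
"&later_rq->rd") compares the locations of the two members, which
differ for any two distinct rqs, so such a check would never skip
anything. A minimal user-space sketch with simplified stand-in structs
(not the kernel code itself) to illustrate the difference:

#include <stdio.h>

/* Simplified stand-ins for the kernel structures. */
struct root_domain { int dummy; };
struct rq { struct root_domain *rd; };

int main(void)
{
        struct root_domain rd = { 0 };
        struct rq a = { &rd }, b = { &rd };     /* two rqs, same root domain */
        struct rq *rq = &a, *later_rq = &b;

        /* Compares the addresses of the rd members: always true for
         * distinct rq structs, so a guard based on it never fires. */
        printf("&rq->rd != &later_rq->rd -> %d\n", &rq->rd != &later_rq->rd);

        /* Compares the root_domain pointers themselves: false here,
         * because both rqs are in the same root domain. */
        printf(" rq->rd !=  later_rq->rd -> %d\n", rq->rd != later_rq->rd);

        return 0;
}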
-- Daniel
> Signed-off-by: Shang XiaoJing <shangxiaojing@...wei.com>
> ---
> kernel/sched/deadline.c | 28 +++++++++++++++-------------
> 1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 3bf4b12ec5b7..58ca9aaa9c44 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -714,20 +714,22 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p
>  		add_rq_bw(&p->dl, &later_rq->dl);
>  	}
> 
> -	/*
> -	 * And we finally need to fixup root_domain(s) bandwidth accounting,
> -	 * since p is still hanging out in the old (now moved to default) root
> -	 * domain.
> -	 */
> -	dl_b = &rq->rd->dl_bw;
> -	raw_spin_lock(&dl_b->lock);
> -	__dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
> -	raw_spin_unlock(&dl_b->lock);
> +	if (rq->rd != later_rq->rd) {
> +		/*
> +		 * And we finally need to fixup root_domain(s) bandwidth accounting,
> +		 * since p is still hanging out in the old (now moved to default) root
> +		 * domain.
> +		 */
> +		dl_b = &rq->rd->dl_bw;
> +		raw_spin_lock(&dl_b->lock);
> +		__dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
> +		raw_spin_unlock(&dl_b->lock);
> 
> -	dl_b = &later_rq->rd->dl_bw;
> -	raw_spin_lock(&dl_b->lock);
> -	__dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
> -	raw_spin_unlock(&dl_b->lock);
> +		dl_b = &later_rq->rd->dl_bw;
> +		raw_spin_lock(&dl_b->lock);
> +		__dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
> +		raw_spin_unlock(&dl_b->lock);
> +	}
> 
>  	set_task_cpu(p, later_rq->cpu);
>  	double_unlock_balance(later_rq, rq);