Message-ID: <20220829115904.28560-1-shangxiaojing@huawei.com>
Date: Mon, 29 Aug 2022 19:59:04 +0800
From: Shang XiaoJing <shangxiaojing@...wei.com>
To: <mingo@...hat.com>, <peterz@...radead.org>,
<juri.lelli@...hat.com>, <vincent.guittot@...aro.org>,
<dietmar.eggemann@....com>, <rostedt@...dmis.org>,
<bsegall@...gle.com>, <mgorman@...e.de>, <bristot@...hat.com>,
<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
CC: <shangxiaojing@...wei.com>
Subject: [PATCH v2] sched/deadline: Skip meaningless bw updates in dl_task_offline_migration
Skip the meaningless root domain bandwidth updates in
dl_task_offline_migration() when the task stays in the same root domain:
subtracting the task's bandwidth from rq->rd and then adding it back to
later_rq->rd is a no-op when the two runqueues share the same root domain.
Signed-off-by: Shang XiaoJing <shangxiaojing@...wei.com>
---
Changes in v2:
- fixed the subject and commit message
---
kernel/sched/deadline.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 8e14dc21d829..9660e166c8ec 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -714,20 +714,22 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p
add_rq_bw(&p->dl, &later_rq->dl);
}
- /*
- * And we finally need to fixup root_domain(s) bandwidth accounting,
- * since p is still hanging out in the old (now moved to default) root
- * domain.
- */
- dl_b = &rq->rd->dl_bw;
- raw_spin_lock(&dl_b->lock);
- __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
- raw_spin_unlock(&dl_b->lock);
+	if (rq->rd != later_rq->rd) {
+ /*
+ * And we finally need to fixup root_domain(s) bandwidth accounting,
+ * since p is still hanging out in the old (now moved to default) root
+ * domain.
+ */
+ dl_b = &rq->rd->dl_bw;
+ raw_spin_lock(&dl_b->lock);
+ __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
+ raw_spin_unlock(&dl_b->lock);
- dl_b = &later_rq->rd->dl_bw;
- raw_spin_lock(&dl_b->lock);
- __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
- raw_spin_unlock(&dl_b->lock);
+ dl_b = &later_rq->rd->dl_bw;
+ raw_spin_lock(&dl_b->lock);
+ __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
+ raw_spin_unlock(&dl_b->lock);
+ }
set_task_cpu(p, later_rq->cpu);
double_unlock_balance(later_rq, rq);
--
2.17.1