Message-ID: <20220827020430.29415-1-shangxiaojing@huawei.com>
Date: Sat, 27 Aug 2022 10:04:30 +0800
From: Shang XiaoJing <shangxiaojing@...wei.com>
To: <mingo@...hat.com>, <peterz@...radead.org>,
<juri.lelli@...hat.com>, <vincent.guittot@...aro.org>,
<dietmar.eggemann@....com>, <rostedt@...dmis.org>,
<bsegall@...gle.com>, <mgorman@...e.de>, <bristot@...hat.com>,
<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
CC: <shangxiaojing@...wei.com>
Subject: [PATCH -next] sched/deadline: Save processing meaningless ops in dl_task_offline_migration
The task's bandwidth will be subtracted from the old root domain and
added to the new one even when find_lock_later_rq() returns a rq that
belongs to the same root domain as the old rq. Skip the root domain
bandwidth accounting when the root domain is unchanged.
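The intent of the change can be sketched in userspace as follows; the
struct definitions here are simplified stand-ins for the kernel's
root_domain/rq types, not the real ones, and move_dl_bw() is a
hypothetical helper standing in for the __dl_sub()/__dl_add() pair:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel types involved (illustrative only). */
struct root_domain {
	long dl_bw_total;	/* stands in for root_domain::dl_bw */
};

struct rq {
	struct root_domain *rd;
};

/*
 * Move the task's bandwidth between root domains only when they actually
 * differ. Note that the rd pointers themselves must be compared: taking
 * their addresses (&old_rq->rd != &later_rq->rd) would always be true for
 * two distinct rqs and defeat the check.
 */
static void move_dl_bw(struct rq *old_rq, struct rq *later_rq, long task_bw)
{
	if (old_rq->rd != later_rq->rd) {
		old_rq->rd->dl_bw_total -= task_bw;
		later_rq->rd->dl_bw_total += task_bw;
	}
}
```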
Signed-off-by: Shang XiaoJing <shangxiaojing@...wei.com>
---
kernel/sched/deadline.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3bf4b12ec5b7..58ca9aaa9c44 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -714,20 +714,22 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p
add_rq_bw(&p->dl, &later_rq->dl);
}
- /*
- * And we finally need to fixup root_domain(s) bandwidth accounting,
- * since p is still hanging out in the old (now moved to default) root
- * domain.
- */
- dl_b = &rq->rd->dl_bw;
- raw_spin_lock(&dl_b->lock);
- __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
- raw_spin_unlock(&dl_b->lock);
+ if (rq->rd != later_rq->rd) {
+ /*
+ * And we finally need to fixup root_domain(s) bandwidth accounting,
+ * since p is still hanging out in the old (now moved to default) root
+ * domain.
+ */
+ dl_b = &rq->rd->dl_bw;
+ raw_spin_lock(&dl_b->lock);
+ __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
+ raw_spin_unlock(&dl_b->lock);
- dl_b = &later_rq->rd->dl_bw;
- raw_spin_lock(&dl_b->lock);
- __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
- raw_spin_unlock(&dl_b->lock);
+ dl_b = &later_rq->rd->dl_bw;
+ raw_spin_lock(&dl_b->lock);
+ __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
+ raw_spin_unlock(&dl_b->lock);
+ }
set_task_cpu(p, later_rq->cpu);
double_unlock_balance(later_rq, rq);
--
2.17.1