Message-Id: <1490327582-4376-6-git-send-email-luca.abeni@santannapisa.it>
Date: Fri, 24 Mar 2017 04:52:58 +0100
From: luca abeni <luca.abeni@...tannapisa.it>
To: linux-kernel@...r.kernel.org
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@....com>,
Claudio Scordino <claudio@...dence.eu.com>,
Steven Rostedt <rostedt@...dmis.org>,
Tommaso Cucinotta <tommaso.cucinotta@...up.it>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Joel Fernandes <joelaf@...gle.com>,
Mathieu Poirier <mathieu.poirier@...aro.org>,
Luca Abeni <luca.abeni@...tannapisa.it>
Subject: [RFC v5 5/9] sched/deadline: do not reclaim the whole CPU bandwidth
From: Luca Abeni <luca.abeni@...tannapisa.it>
The original GRUB algorithm tends to reclaim 100% of the CPU time, which
allows a SCHED_DEADLINE CPU hog to starve non-deadline tasks.
To address this issue, allow the scheduler to reclaim only the fraction
of CPU time specified by the global RT bandwidth
(sched_rt_runtime_us / sched_rt_period_us).
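For reference, the fixed-point arithmetic used by the patched grub_reclaim()
can be sketched in plain C. This is a standalone illustration, not kernel
code: the *_demo names are made up here, and the constants mirror the 2^20
utilization fixed point (the ">> 20") and the 2^8 inverse-fraction fixed
point (the ">> 12" / ">> 8") visible in the diff below.

```c
#include <stdint.h>

/*
 * Utilizations are kept in 2^20 fixed point; the inverse of the
 * reclaimable fraction (1/Umax) is kept in 2^8 fixed point, obtained
 * by dropping 12 fractional bits from a 2^20 ratio.
 */

/* Like the kernel's to_ratio(): runtime/period in 2^20 fixed point. */
static uint64_t to_ratio_demo(uint64_t period, uint64_t runtime)
{
	return (runtime << 20) / period;
}

/*
 * deadline_bw_inv as computed in the patch: note the swapped argument
 * order, to_ratio(runtime, period), which yields period/runtime = 1/Umax
 * in 2^20 fixed point, then truncated to 2^8 fixed point.
 */
static uint64_t deadline_bw_inv_demo(uint64_t rt_runtime, uint64_t rt_period)
{
	return to_ratio_demo(rt_runtime, rt_period) >> 12;
}

/*
 * Charge the running task delta * Uact / Umax, as in the patched
 * grub_reclaim(): running_bw is 2^20 fixed point, bw_inv is 2^8.
 */
static uint64_t grub_reclaim_demo(uint64_t delta, uint64_t running_bw,
				  uint64_t bw_inv)
{
	return (delta * running_bw * bw_inv) >> 20 >> 8;
}
```

With the default rt_runtime/rt_period of 950000/1000000 us, deadline_bw_inv
comes out to 269 (roughly (1/0.95) << 8), so a task set with Uact = 0.5 is
charged about delta * 0.5 / 0.95 per delta of execution: at most 95% of the
CPU can be reclaimed, leaving 5% for non-deadline tasks.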
Signed-off-by: Luca Abeni <luca.abeni@...tannapisa.it>
Tested-by: Daniel Bristot de Oliveira <bristot@...hat.com>
---
kernel/sched/core.c | 6 ++++++
kernel/sched/deadline.c | 7 ++++++-
kernel/sched/sched.h | 6 ++++++
3 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 20c62e7..efa88eb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6716,6 +6716,12 @@ static void sched_dl_do_global(void)
 		raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 		rcu_read_unlock_sched();
+		if (dl_b->bw == -1)
+			cpu_rq(cpu)->dl.deadline_bw_inv = 1 << 8;
+		else
+			cpu_rq(cpu)->dl.deadline_bw_inv =
+				to_ratio(global_rt_runtime(),
+					 global_rt_period()) >> 12;
 	}
 }
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 6035311..e964051 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -212,6 +212,11 @@ void init_dl_rq(struct dl_rq *dl_rq)
 #else
 	init_dl_bw(&dl_rq->dl_bw);
 #endif
+	if (global_rt_runtime() == RUNTIME_INF)
+		dl_rq->deadline_bw_inv = 1 << 8;
+	else
+		dl_rq->deadline_bw_inv =
+			to_ratio(global_rt_runtime(), global_rt_period()) >> 12;
 }
#ifdef CONFIG_SMP
@@ -871,7 +876,7 @@ extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
  */
 u64 grub_reclaim(u64 delta, struct rq *rq)
 {
-	return (delta * rq->dl.running_bw) >> 20;
+	return (delta * rq->dl.running_bw * rq->dl.deadline_bw_inv) >> 20 >> 8;
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 57bb79b..141549b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -565,6 +565,12 @@ struct dl_rq {
 	 * task blocks
 	 */
 	u64 running_bw;
+
+	/*
+	 * Inverse of the fraction of CPU utilization that can be reclaimed
+	 * by the GRUB algorithm.
+	 */
+	u64 deadline_bw_inv;
 };
 
 #ifdef CONFIG_SMP
--
2.7.4