Message-ID: <20141118022113.GA10673@kernel>
Date: Tue, 18 Nov 2014 10:21:13 +0800
From: Wanpeng Li <wanpeng.li@...ux.intel.com>
To: Juri Lelli <juri.lelli@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Kirill Tkhai <ktkhai@...allels.com>,
linux-kernel@...r.kernel.org,
Wanpeng Li <wanpeng.li@...ux.intel.com>
Subject: Re: [PATCH] sched/rt: fix rt runtime corruption when dl refuses a
	smaller bandwidth
Ping Juri, ;-)
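
For context, the ordering on the sysctl write path that makes the
corruption possible looks roughly like the sketch below (simplified and
from memory, so the exact names and undo handling in the tree may
differ slightly):

	/* kernel/sched/core.c, sysctl handler -- simplified sketch */
	int sched_rt_handler(struct ctl_table *table, int write, ...)
	{
		...
		ret = proc_dointvec(table, write, buffer, lenp, ppos);
		if (!ret && write) {
			ret = sched_rt_global_validate();
			if (ret)
				goto undo;

			/*
			 * Before this patch, the !CONFIG_RT_GROUP_SCHED variant of
			 * sched_rt_global_constraints() already wrote the new value
			 * into every rt_rq->rt_runtime at this point...
			 */
			ret = sched_rt_global_constraints();
			if (ret)
				goto undo;

			/*
			 * ...so if the dl class refuses the new bandwidth here, the
			 * per-cpu rt_runtime keeps the refused value: the undo path
			 * only restores the sysctl variables.  Moving the rt_rq
			 * update into sched_rt_do_global(), which runs only after
			 * every check has passed, avoids that.
			 */
			ret = sched_dl_global_constraints();
			if (ret)
				goto undo;

			sched_rt_do_global();
			sched_dl_do_global();
		}
		...
	}
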
On Thu, Nov 13, 2014 at 04:47:39PM +0800, Wanpeng Li wrote:
>The dl class refuses any attempt, through sched_rt_runtime_us and
>sched_rt_period_us, to set the bandwidth to a value smaller than the
>bandwidth currently allocated in any of the root_domains. However, in
>the !CONFIG_RT_GROUP_SCHED case the rt runtime is already written
>according to sched_rt_runtime_us before the dl class has verified that
>the new bandwidth is acceptable.
>
>As a result, the rt runtime is corrupted when dl refuses the new
>bandwidth, because there is no undo path to restore the old value.
>
>Fix this by setting the rt runtime only after all the sanity checks
>have passed in the !CONFIG_RT_GROUP_SCHED case.
>
>Signed-off-by: Wanpeng Li <wanpeng.li@...ux.intel.com>
>---
> kernel/sched/core.c | 30 ++++++++++++++++--------------
> 1 file changed, 16 insertions(+), 14 deletions(-)
>
>diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>index 2e7578a..355dde3 100644
>--- a/kernel/sched/core.c
>+++ b/kernel/sched/core.c
>@@ -7795,20 +7795,7 @@ static int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
> #else /* !CONFIG_RT_GROUP_SCHED */
> static int sched_rt_global_constraints(void)
> {
>- unsigned long flags;
>- int i, ret = 0;
>-
>- raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
>- for_each_possible_cpu(i) {
>- struct rt_rq *rt_rq = &cpu_rq(i)->rt;
>-
>- raw_spin_lock(&rt_rq->rt_runtime_lock);
>- rt_rq->rt_runtime = global_rt_runtime();
>- raw_spin_unlock(&rt_rq->rt_runtime_lock);
>- }
>- raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
>-
>- return ret;
>+ return 0;
> }
> #endif /* CONFIG_RT_GROUP_SCHED */
>
>@@ -7890,6 +7877,21 @@ static int sched_rt_global_validate(void)
>
> static void sched_rt_do_global(void)
> {
>+#ifndef CONFIG_RT_GROUP_SCHED
>+ unsigned long flags;
>+ int i;
>+
>+ raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
>+ for_each_possible_cpu(i) {
>+ struct rt_rq *rt_rq = &cpu_rq(i)->rt;
>+
>+ raw_spin_lock(&rt_rq->rt_runtime_lock);
>+ rt_rq->rt_runtime = global_rt_runtime();
>+ raw_spin_unlock(&rt_rq->rt_runtime_lock);
>+ }
>+ raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
>+#endif
>+
> def_rt_bandwidth.rt_runtime = global_rt_runtime();
> def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
> }
>--
>1.9.1