Message-ID: <20110308183442.GL2868@balbir.in.ibm.com>
Date: Wed, 9 Mar 2011 00:04:42 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: Yong Zhang <yong.zhang0@...il.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <pzijlstr@...hat.com>,
Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
Bharata B Rao <bharata.rao@...ibm.com>
Subject: Re: [BUGFIX][PATCH] Fix sched rt group scheduling when hierarchy is
	enabled

* Yong Zhang <yong.zhang0@...il.com> [2011-03-08 16:42:00]:
> On Mon, Mar 7, 2011 at 3:00 PM, Yong Zhang <yong.zhang0@...il.com> wrote:
> >
> > I have tested with the attached patch (web mail will mangle it) on top
> > of yours, but I failed to trigger that WARNING.
> >
> > Below are my steps:
> > 1)mount -t cgroup -ocpu cpu /mnt
> > 2)mkdir /mnt/test-1
> > 3)mkdir /mnt/test-1-1
> > 4)set rt_runtime to 100000 for test-1 and test-1-1
> > 5)run a loop task and attach it to test-1-1
> >
> > So I worked out a scenario to match your description,
> > but it's based on the unpatched (without your patch) kernel:
> > assume a dual-core system with the rt groups test-1/test-1-1,
> > a loop task running on CPU-1, and test-1 and test-1-1 both
> > throttled.
> >
> >   CPU-0                                          CPU-1
> >                                                  (loop task keeps running)
> >   do_sched_rt_period_timer(test-1-1)
> >   {
> >           for CPU-1:
> >                   unthrottle test-1-1.rt_rq[1];
> >                   but fail to enqueue it because
> >                   we always get test-1-1.rt_se[0]
> >                   due to smp_processor_id();
> >                   thus test-1.rt_rq[1].nr_running == 0;
> >                   and it returns with rt_time == 0;
> >   }
> >   do_sched_rt_period_timer(test-1)
> >           unthrottle test-1.rt_rq[1] but
> >           fail to enqueue test-1.rt_rq[1],
> >           because nr_running == 0;
> >
> > So if we have your patch for issue-1, when
> > the hrtimer later runs on CPU-1, test-1-1
> > and test-1 will be enqueued because of that
> > additional check in the rt_time == 0 case.
> >
> > But once we have your patch for issue-2, the above
> > problem will be fixed by it, right?
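
For anyone reading along, the pre-fix enqueue path looks roughly like the
sketch below (based on what I remember of the 2.6.38-era sched_rt.c, so
treat it as an illustration rather than an exact copy of any tree). The
point is the smp_processor_id() indexing: the period timer running on
CPU-0 looks up test-1-1.rt_se[0] even when it is unthrottling
test-1-1.rt_rq[1], so nothing ever lands on CPU-1's runqueue.

/*
 * Sketch of the group enqueue path before the fix (CONFIG_RT_GROUP_SCHED).
 * The rt_se is picked by the CPU the caller happens to run on, not by the
 * CPU that owns the rt_rq being unthrottled.
 */
static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
{
        int this_cpu = smp_processor_id();      /* timer CPU, not rt_rq's CPU */
        struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
        struct sched_rt_entity *rt_se;

        rt_se = rt_rq->tg->rt_se[this_cpu];     /* wrong entity for a remote rt_rq */

        if (rt_rq->rt_nr_running) {
                if (rt_se && !on_rt_rq(rt_se))
                        enqueue_rt_entity(rt_se, false);
                if (rt_rq->highest_prio.curr < curr->prio)
                        resched_task(curr);
        }
}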
>
> And another finding is that the top rt_rq could trigger your
> additional code, but we don't need to enqueue
> root_task_group.rt_se[].
>
> BTW, I updated my patch (attached) to avoid testing on the top rt_rq.
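
Right, the root group has no entity to enqueue. If I recall the setup path
correctly, the per-cpu rt_se pointers are only populated for child groups;
for the root group init_tg_rt_entry() is called with a NULL rt_se, so
root_task_group.rt_se[cpu] stays NULL. A rough sketch (details vary a bit
between versions):

/*
 * Rough sketch of init_tg_rt_entry() from kernel/sched.c.  sched_init()
 * calls it with rt_se == NULL for the root task group, so the root rt_rq
 * never has a schedulable entity of its own.
 */
static void init_tg_rt_entry(struct task_group *tg, struct rt_rq *rt_rq,
                             struct sched_rt_entity *rt_se, int cpu,
                             struct sched_rt_entity *parent)
{
        tg->rt_rq[cpu] = rt_rq;
        tg->rt_se[cpu] = rt_se;         /* NULL for the root group */
        if (!rt_se)
                return;                 /* nothing to link for the root */
        /* ... link rt_se to its parent rt_rq for child groups ... */
}

So skipping the top rt_rq in your updated patch looks right, and the
"rt_se &&" guard in the WARN_ON below covers the same case.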
>
> Thanks,
> Yong
>
>
> --
> Only stand for myself
> diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
> index 01f75a5..b02b516 100644
> --- a/kernel/sched_rt.c
> +++ b/kernel/sched_rt.c
> @@ -568,8 +568,14 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
>                          raw_spin_unlock(&rt_rq->rt_runtime_lock);
>                  } else if (rt_rq->rt_nr_running) {
>                          idle = 0;
> -                        if (!rt_rq_throttled(rt_rq))
> +                        if (!rt_rq_throttled(rt_rq)) {
> +                                struct sched_rt_entity *rt_se;
> +                                int cpu = cpu_of(rq_of_rt_rq(rt_rq));
> +
> +                                rt_se = rt_rq->tg->rt_se[cpu];
> +                                WARN_ON(rt_se && !on_rt_rq(rt_se));
>                                  enqueue = 1;

Fair enough, I think it is good to have the warning in there.

> +                        }
>                  }
>
>                  if (enqueue)
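
For reference, on_rt_rq() (as I remember it in sched_rt.c) just tests
whether the entity sits on a run list, so the warning fires exactly when a
group with runnable tasks gets unthrottled while its entity is not queued
anywhere, which is the situation the fix is meant to prevent:

/* Sketch of the helper used by the WARN_ON above: an rt_se counts as
 * queued once it is on some rt_rq's run list. */
static inline int on_rt_rq(struct sched_rt_entity *rt_se)
{
        return !list_empty(&rt_se->run_list);
}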
--
Three Cheers,
Balbir