Message-ID: <20071119150126.GA24810@linux.vnet.ibm.com>
Date: Mon, 19 Nov 2007 20:31:26 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: dmitry.adamushko@...il.com, a.p.zijlstra@...llo.nl,
dhaval@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
efault@....de, skumar@...ux.vnet.ibm.com,
Balbir Singh <balbir@...ibm.com>
Subject: Re: [PATCH 1/2] sched: Minor cleanups
On Mon, Nov 19, 2007 at 02:08:03PM +0100, Ingo Molnar wrote:
> > #define for_each_leaf_cfs_rq(rq, cfs_rq) \
> > - list_for_each_entry(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
> > + list_for_each_entry_rcu(cfs_rq, &rq->leaf_cfs_rq_list, leaf_cfs_rq_list)
> >
> > /* Do the two (enqueued) entities belong to the same group ? */
> > static inline int
> > @@ -1126,7 +1126,10 @@ static void print_cfs_stats(struct seq_f
> > #ifdef CONFIG_FAIR_GROUP_SCHED
> > print_cfs_rq(m, cpu, &cpu_rq(cpu)->cfs);
> > #endif
> > +
> > + rcu_read_lock();
> > for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
> > print_cfs_rq(m, cpu, cfs_rq);
> > + rcu_read_unlock();
>
> hm, why is this a cleanup?
Sorry for the wrong subject. It was supposed to include the above bug fix,
related to how we walk the task group list.
Thinking about it now, I realize that print_cfs_rq() can potentially
sleep and hence it cannot be surrounded by rcu_read_lock()/unlock().
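
For reference, the constraint at issue: with classic (non-preemptible)
RCU, rcu_read_lock() disables preemption, so nothing inside the
read-side critical section may block. A minimal sketch of why the hunk
above is therefore broken (assuming print_cfs_rq() can end up calling
something that sleeps):

	rcu_read_lock();		/* preemption disabled (classic RCU) */
	for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
		print_cfs_rq(m, cpu, cfs_rq);	/* may sleep -> illegal here */
	rcu_read_unlock();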
Also, as Dipankar just pointed out to me, sched_create/destroy_group
aren't serialized at all currently, so we need a mutex to protect them.
The same mutex can then be used when walking the list in print_cfs_stats() ..
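
Roughly what I have in mind (the mutex name and exact placement below
are illustrative only, not the final patch):

	static DEFINE_MUTEX(task_group_mutex);

	struct task_group *sched_create_group(void)
	{
		struct task_group *tg;

		mutex_lock(&task_group_mutex);
		/* ... allocate tg, link its cfs_rqs into each rq's leaf list ... */
		mutex_unlock(&task_group_mutex);
		return tg;
	}

	static void print_cfs_stats(struct seq_file *m, int cpu)
	{
		struct cfs_rq *cfs_rq;

		mutex_lock(&task_group_mutex);
		for_each_leaf_cfs_rq(cpu_rq(cpu), cfs_rq)
			print_cfs_rq(m, cpu, cfs_rq);	/* sleeping is fine here */
		mutex_unlock(&task_group_mutex);
	}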
Will send updated patches soon ..
--
Regards,
vatsa