Message-ID: <20110214181027.GI13097@redhat.com>
Date: Mon, 14 Feb 2011 13:10:27 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc: Jens Axboe <axboe@...nel.dk>, Shaohua Li <shaohua.li@...el.com>,
lkml <linux-kernel@...r.kernel.org>,
Chad Talbott <ctalbott@...gle.com>,
Divyesh Shah <dpshah@...gle.com>
Subject: Re: [PATCH 5/6 v4] cfq-iosched: CFQ group hierarchical scheduling
and use_hierarchy interface
On Mon, Feb 14, 2011 at 11:20:33AM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> > On Thu, Feb 10, 2011 at 03:47:45PM +0800, Gui Jianfeng wrote:
> >> CFQ group hierarchical scheduling and use_hierarchy interface.
> >>
> >
> > Hi Gui,
> >
> > I have done a quick high-level review. Some minor comments inline.
> >
> > [..]
> >> struct cfq_data {
> >> struct request_queue *queue;
> >> - /* Root service tree for cfq_groups */
> >> - struct cfq_rb_root grp_service_tree;
> >> struct cfq_group root_group;
> >>
> >> + /* cfq group schedule in flat or hierarchy manner. */
> >> + bool use_hierarchy;
> >> +
> >
> > This seems to be redundant now? Nobody is using it?
> >
> >> /*
> >> * The priority currently being served
> >> */
> >> @@ -246,6 +251,9 @@ struct cfq_data {
> >> unsigned long workload_expires;
> >> struct cfq_group *serving_group;
> >>
> >> + /* Service tree for cfq group flat scheduling mode. */
> >> + struct cfq_rb_root grp_service_tree;
> >
> > The above comment is misleading. This service tree is now used for
> > both flat and hierarchical modes.
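> >
> > Something like this would be clearer (just a suggested wording):
> >
> > 	/*
> > 	 * Service tree for cfq groups; used by both flat and
> > 	 * hierarchical scheduling modes.
> > 	 */
> > 	struct cfq_rb_root grp_service_tree;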
> >
> > [..]
> >> static void
> >> cfq_group_service_tree_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
> >> {
> >> - struct cfq_rb_root *st = &cfqd->grp_service_tree;
> >> struct cfq_entity *cfqe = &cfqg->cfqe;
> >> - struct cfq_entity *__cfqe;
> >> struct rb_node *n;
> >> + struct cfq_entity *entity;
> >> + struct cfq_rb_root *st;
> >> + struct cfq_group *__cfqg;
> >>
> >> cfqg->nr_cfqq++;
> >> +
> >> if (!RB_EMPTY_NODE(&cfqe->rb_node))
> >> return;
> >>
> >> /*
> >> -	 * Currently put the group at the end. Later implement something
> >> -	 * so that groups get lesser vtime based on their weights, so that
> >> -	 * a group does not lose everything if not continuously backlogged.
> >> + * Enqueue this group and its ancestors onto their service tree.
> >> */
> >> - n = rb_last(&st->rb);
> >> - if (n) {
> >> - __cfqe = rb_entry_entity(n);
> >> - cfqe->vdisktime = __cfqe->vdisktime + CFQ_IDLE_DELAY;
> >> - } else
> >> - cfqe->vdisktime = st->min_vdisktime;
> >> + while (cfqe) {
> >> + if (!RB_EMPTY_NODE(&cfqe->rb_node))
> >> + return;
> >>
> >> - cfq_entity_service_tree_add(st, cfqe);
> >> + /*
> >> +	 * Currently put the group at the end. Later implement
> >> +	 * something so that groups get lesser vtime based on
> >> +	 * their weights, so that a group does not lose
> >> +	 * everything if it was not continuously backlogged.
> >> + */
> >
> > Can we use the vdisktime boost logic for groups also? I think it can be
> > a separate patch in the series (the last one). Keeping it as a separate
> > patch will also help you coordinate with Chad's patch.
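> >
> > As a rough illustration (an untested sketch; cfq_boosted_vdisktime()
> > is a hypothetical name, not something in the tree), the idea would be
> > to reuse the entity's remembered vdisktime instead of always queueing
> > it at the end:
> >
> > 	/*
> > 	 * Place a re-queued entity based on its remembered vdisktime,
> > 	 * clamped to the tree's min_vdisktime, so a group that was
> > 	 * briefly un-backlogged does not lose its entire position.
> > 	 */
> > 	static u64 cfq_boosted_vdisktime(struct cfq_rb_root *st,
> > 					 struct cfq_entity *cfqe)
> > 	{
> > 		if (cfqe->vdisktime > st->min_vdisktime)
> > 			return cfqe->vdisktime;
> > 		return st->min_vdisktime;
> > 	}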
> >
> >> + st = cfqe->service_tree;
> >
> > Group entities set their service_tree pointer when they are allocated
> > and retain it even after they are deleted from the service tree. Queue
> > entities seem to have it set to NULL when they are deleted from the
> > service tree, and it gets set again when the queue is inserted. It
> > would be nice if we could fix this discrepancy and keep the two
> > consistent. I think clearing cfqe->service_tree and then recalculating
> > it for groups as well is the better approach.
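> >
> > Roughly what I have in mind (illustrative only, untested):
> >
> > 	static void cfq_entity_service_tree_del(struct cfq_rb_root *st,
> > 						struct cfq_entity *cfqe)
> > 	{
> > 		if (!RB_EMPTY_NODE(&cfqe->rb_node))
> > 			cfq_rb_erase(&cfqe->rb_node, st);
> > 		/* Clear the pointer for groups too; recompute on insert. */
> > 		cfqe->service_tree = NULL;
> > 	}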
>
> Vivek,
>
> Currently, a cfq queue might change its workload type and io class, so we
> need to recalculate its service_tree. But for cfq groups, IMHO we don't
> need to add this complexity for the time being.
> I think we can add this change as soon as different io classes or
> workload types are introduced for groups. What do you think?
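>
> For example, queues pick their service tree on every insert because the
> tree depends on the queue's io class and workload type, roughly like the
> following (this paraphrases cfq_service_tree_add(), it is not a verbatim
> quote):
>
> 	service_tree = service_tree_for(cfqq->cfqg, cfqq_prio(cfqq),
> 					cfqq_type(cfqq));
>
> Groups have only one service tree per level today, so their pointer
> never needs to change.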
Ok, that's fine.
Thanks
Vivek