Message-ID: <20110124225253.GF9420@redhat.com>
Date: Mon, 24 Jan 2011 17:52:53 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc: Jens Axboe <axboe@...nel.dk>,
linux kernel mailing list <linux-kernel@...r.kernel.org>,
Corrado Zoccolo <czoccolo@...il.com>,
Chad Talbott <ctalbott@...gle.com>,
Nauman Rafique <nauman@...gle.com>,
Divyesh Shah <dpshah@...gle.com>, jmoyer@...hat.com,
Shaohua Li <shaohua.li@...el.com>
Subject: Re: [PATCH 5/6 v3] cfq-iosched: CFQ group hierarchical scheduling and use_hierarchy interface
On Mon, Dec 27, 2010 at 04:51:14PM +0800, Gui Jianfeng wrote:
[..]
> -static struct cfq_group *cfq_get_next_cfqg(struct cfq_data *cfqd);
> -
> static struct cfq_rb_root *service_tree_for(struct cfq_group *cfqg,
> enum wl_prio_t prio,
> enum wl_type_t type)
> @@ -640,10 +646,19 @@ static inline unsigned cfq_group_get_avg_queues(struct cfq_data *cfqd,
> static inline unsigned
> cfq_group_slice(struct cfq_data *cfqd, struct cfq_group *cfqg)
> {
> - struct cfq_rb_root *st = &cfqd->grp_service_tree;
> struct cfq_entity *cfqe = &cfqg->cfqe;
> + struct cfq_rb_root *st = cfqe->service_tree;
> + int group_slice = cfq_target_latency;
> +
> + /* Calculate group slice in a hierarchical way */
> + do {
> + group_slice = group_slice * cfqe->weight / st->total_weight;
> + cfqe = cfqe->parent;
> + if (cfqe)
> + st = cfqe->service_tree;
> + } while (cfqe);
>
> - return cfq_target_latency * cfqe->weight / st->total_weight;
> + return group_slice;
> }
Gui, I think this is still not fully correct. In flat mode there was
only one service tree at the top and all the groups were on that service
tree, so st->total_weight worked fine.
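(For example, with cfq_target_latency at its default of roughly 300ms, a
group of weight 500 on a tree with total_weight 1000 got a 150ms slice.)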
But now, with hierarchical mode, a child group might be on one of the
sync-idle service trees while there might be other queues on other
service trees in the parent group.
So I think we will have to introduce a notion of total group weight (and
not just per-service-tree weight) to calculate this accurately.
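
To illustrate what I mean, here is a minimal userspace sketch. The
entity layout and the children_weight field are hypothetical stand-ins
for whatever the real cfq structures would carry, not the actual kernel
code:

/* Hypothetical model: each entity tracks the summed weight of ALL of
 * its children (queues and groups), not just the entities on one
 * service tree.  Walking up with that per-parent total avoids
 * miscounting a child's share when its siblings sit on different
 * service trees. */
#include <stdio.h>

struct entity {
	unsigned int weight;		/* this entity's own weight */
	unsigned int children_weight;	/* sum of ALL children's weights
					 * (hypothetical field) */
	struct entity *parent;		/* NULL at the root */
};

static unsigned int group_slice(struct entity *e, unsigned int target_latency)
{
	unsigned int slice = target_latency;

	/* Scale by this entity's share of its parent's total children
	 * weight at each level of the hierarchy. */
	for (; e->parent; e = e->parent)
		slice = slice * e->weight / e->parent->children_weight;

	return slice;
}

int main(void)
{
	struct entity root   = { .weight = 0,   .children_weight = 1000,
				 .parent = NULL };
	struct entity parent = { .weight = 600, .children_weight = 800,
				 .parent = &root };
	struct entity child  = { .weight = 400, .children_weight = 0,
				 .parent = &parent };

	/* 300ms * (400/800) * (600/1000) = 90ms */
	printf("child slice: %ums\n", group_slice(&child, 300));
	return 0;
}

The point is just that each level scales by the parent's *total*
children weight, so siblings parked on other service trees still count.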
Secondly, this logic does not take ioprio or sync/async into account
when calculating the group share. I think for the time being we can
keep it simple and refine it later.
Also, I want to see some integration/simplification of the workload
slice and cfqq slice calculation logic with the group slice logic. I
guess we will take that up later.
>
> static inline void
> @@ -666,7 +681,8 @@ cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq)
> /* scale low_slice according to IO priority
> * and sync vs async */
> unsigned low_slice =
> - min(slice, base_low_slice * slice / sync_slice);
> + min(slice, base_low_slice * slice /
> + sync_slice);
Why the extra line break? The original statement fit on one line.
Thanks
Vivek