Message-ID: <20121217205317.GI7235@redhat.com>
Date: Mon, 17 Dec 2012 15:53:18 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: lizefan@...wei.com, axboe@...nel.dk,
containers@...ts.linux-foundation.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, ctalbott@...gle.com, rni@...gle.com
Subject: Re: [PATCH 07/12] cfq-iosched: implement hierarchy-ready cfq_group
charge scaling
On Fri, Dec 14, 2012 at 02:41:20PM -0800, Tejun Heo wrote:
> Currently, cfqg charges are scaled directly according to cfqg->weight.
> Regardless of the number of active cfqgs or the amount of active
> weights, a given weight value always scales charge the same way. This
> works fine as long as all cfqgs are treated equally regardless of
> their positions in the hierarchy, which is what cfq currently
> implements. It can't work in hierarchical settings because the
> interpretation of a given weight value depends on where the weight is
> located in the hierarchy.
I did not understand this. Why would the current scheme not work with
a hierarchy?
When we calculate vdisktime, the used time slice is scaled in proportion
to CFQ_DEFAULT_WEIGHT/cfqg->weight. So the higher the weight, the smaller
the charge, and the cfqg gets scheduled again sooner; the lower the
weight, the higher the vdisktime, and the cfqg gets scheduled less
frequently. As every cfqg does the same thing on its service tree, they
automatically get a fair share w.r.t. their weights.
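To spell out that arithmetic, here is a toy userspace sketch of the
scaling (my own illustration, not the kernel code; in-tree the constant
is spelled CFQ_WEIGHT_DEFAULT and the scaling is done in fixed point):

#include <stdio.h>

#define CFQ_DEFAULT_WEIGHT 500

/* Charge = used slice scaled by default_weight / group_weight. */
static unsigned long long scale_charge(unsigned long slice_used,
                                       unsigned int weight)
{
        return (unsigned long long)slice_used * CFQ_DEFAULT_WEIGHT / weight;
}

int main(void)
{
        /* Higher weight => smaller charge => rescheduled sooner. */
        printf("weight 1000: charge %llu\n", scale_charge(100, 1000)); /*  50 */
        printf("weight  500: charge %llu\n", scale_charge(100,  500)); /* 100 */
        printf("weight  250: charge %llu\n", scale_charge(100,  250)); /* 200 */
        return 0;
}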
And this mechanism should not be impacted by the hierarchy, because we
have a separate service tree at each level. It would break only if you
came up with one flattened tree, where the weights would then have to be
adjusted. If we have a separate service tree in each group, it should
work just fine.
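As a toy numeric example of why per-level trees compose (my numbers,
hypothetical): a group's share of its parent is weight / sum of sibling
weights, so its end share of the disk is just the product down the path,
with no weight adjustment needed:

#include <stdio.h>

int main(void)
{
        /* Level 1: groups A and B under the root. */
        double wA = 500, wB = 1000;
        double shareA = wA / (wA + wB);         /* 1/3 of the disk */
        double shareB = wB / (wA + wB);         /* 2/3 of the disk */

        /* Level 2: A1 and A2 under A, on A's own service tree. */
        double wA1 = 300, wA2 = 700;
        double shareA1 = shareA * wA1 / (wA1 + wA2);    /* 0.10 */
        double shareA2 = shareA * wA2 / (wA1 + wA2);    /* 0.23 */

        printf("A=%.3f B=%.3f A1=%.3f A2=%.3f\n",
               shareA, shareB, shareA1, shareA2);
        return 0;
}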
Thanks
Vivek