Message-ID: <20090708135721.GB24048@redhat.com>
Date: Wed, 8 Jul 2009 09:57:21 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc: linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org, dm-devel@...hat.com,
jens.axboe@...cle.com, nauman@...gle.com, dpshah@...gle.com,
lizf@...fujitsu.com, mikew@...gle.com, fchecconi@...il.com,
paolo.valente@...more.it, ryov@...inux.co.jp,
fernando@....ntt.co.jp, s-uchida@...jp.nec.com, taka@...inux.co.jp,
jmoyer@...hat.com, dhaval@...ux.vnet.ibm.com,
balbir@...ux.vnet.ibm.com, righi.andrea@...il.com,
m-ikeda@...jp.nec.com, jbaron@...hat.com, agk@...hat.com,
snitzer@...hat.com, akpm@...ux-foundation.org, peterz@...radead.org
Subject: Re: [PATCH 21/25] io-controller: Per cgroup request descriptor
support
On Wed, Jul 08, 2009 at 11:27:25AM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> ...
> > }
> > +#ifdef CONFIG_GROUP_IOSCHED
> > +static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
> > +{
> > + return queue_var_show(q->nr_group_requests, (page));
> > +}
> > +
> > +static ssize_t
> > +queue_group_requests_store(struct request_queue *q, const char *page,
> > + size_t count)
> > +{
> > + unsigned long nr;
> > + int ret = queue_var_store(&nr, page, count);
> > + if (nr < BLKDEV_MIN_RQ)
> > + nr = BLKDEV_MIN_RQ;
> > +
> > + spin_lock_irq(q->queue_lock);
> > + q->nr_group_requests = nr;
> > + spin_unlock_irq(q->queue_lock);
> > + return ret;
> > +}
> > +#endif
>
> Hi Vivek,
>
> Do we need to update the congestion thresholds for allocated io groups?
>
Good catch, Gui. Thanks. I will test the patch and queue it up for the next posting.
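
For reference, the per-group update could mirror what blk_queue_congestion_threshold() does for the queue-wide nr_congestion_on/off marks, only computed against q->nr_group_requests and stored in the io_group. A minimal sketch, assuming the io_group carries its own nr_congestion_on/nr_congestion_off fields (the actual layout in the patchset may differ):

void elv_io_group_congestion_threshold(struct request_queue *q,
					struct io_group *iog)
{
	int nr;

	/* Mark the group congested shortly before it exhausts its requests. */
	nr = q->nr_group_requests - (q->nr_group_requests / 8) + 1;
	if (nr > q->nr_group_requests)
		nr = q->nr_group_requests;
	iog->nr_congestion_on = nr;

	/* Clear congestion only once a fair number of requests have freed up. */
	nr = q->nr_group_requests - (q->nr_group_requests / 8)
				  - (q->nr_group_requests / 16) - 1;
	if (nr < 1)
		nr = 1;
	iog->nr_congestion_off = nr;
}

With that in place, the loop below over efqd->group_list refreshes every allocated group's thresholds whenever nr_group_requests is changed through sysfs (presumably /sys/block/<dev>/queue/nr_group_requests).
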
Vivek
> Signed-off-by: Gui Jianfeng <guijianfeng@...fujitsu.com>
> ---
> block/blk-sysfs.c | 15 +++++++++++++++
> 1 files changed, 15 insertions(+), 0 deletions(-)
>
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 577ed42..92b9f25 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -83,17 +83,32 @@ static ssize_t queue_group_requests_show(struct request_queue *q, char *page)
> return queue_var_show(q->nr_group_requests, (page));
> }
>
> +extern void elv_io_group_congestion_threshold(struct request_queue *q,
> + struct io_group *iog);
> +
> static ssize_t
> queue_group_requests_store(struct request_queue *q, const char *page,
> size_t count)
> {
> + struct hlist_node *n;
> + struct io_group *iog;
> + struct elv_fq_data *efqd;
> unsigned long nr;
> int ret = queue_var_store(&nr, page, count);
> +
> if (nr < BLKDEV_MIN_RQ)
> nr = BLKDEV_MIN_RQ;
>
> spin_lock_irq(q->queue_lock);
> +
> q->nr_group_requests = nr;
> +
> + efqd = &q->elevator->efqd;
> +
> + hlist_for_each_entry(iog, n, &efqd->group_list, elv_data_node) {
> + elv_io_group_congestion_threshold(q, iog);
> + }
> +
> spin_unlock_irq(q->queue_lock);
> return ret;
> }
> --
> 1.5.4.rc3
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/