Message-ID: <20090914024401.GA14077@redhat.com>
Date: Sun, 13 Sep 2009 22:44:01 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc: jens.axboe@...cle.com, linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org, dm-devel@...hat.com,
nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
taka@...inux.co.jp, jmoyer@...hat.com, dhaval@...ux.vnet.ibm.com,
balbir@...ux.vnet.ibm.com, righi.andrea@...il.com,
m-ikeda@...jp.nec.com, agk@...hat.com, akpm@...ux-foundation.org,
peterz@...radead.org, jmarchan@...hat.com,
torvalds@...ux-foundation.org, mingo@...e.hu, riel@...hat.com
Subject: Re: [PATCH] io-controller: Fix task hanging when there are more
than one groups
On Fri, Sep 11, 2009 at 09:15:42AM +0800, Gui Jianfeng wrote:
[..]
> Hi Vivek, Jens,
>
> Currently, if only the root cgroup exists and no child cgroup is available, io-controller
> optimizes by not expiring the current ioq, on the assumption that the current ioq belongs to
> the root group. But in some cases this assumption does not hold. Consider the following
> scenario: a child cgroup lives in the root cgroup, and task A runs in that child cgroup and
> issues some IOs. Then we kill task A and remove the child cgroup, so only the root cgroup
> remains. But the ioq is still under service, and from now on it will never expire because of
> the "only root" optimization. The following patch ensures the ioq really does belong to the
> root group when only the root group exists.
>
> Signed-off-by: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cool. Good catch, Gui. Queued for next posting.
Thanks
Vivek
> ---
> block/elevator-fq.c | 13 +++++++------
> 1 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/block/elevator-fq.c b/block/elevator-fq.c
> index b723c12..3f86552 100644
> --- a/block/elevator-fq.c
> +++ b/block/elevator-fq.c
> @@ -2338,9 +2338,10 @@ void elv_reset_request_ioq(struct request_queue *q, struct request *rq)
> }
> }
>
> -static inline int is_only_root_group(void)
> +static inline int is_only_root_group(struct elv_fq_data *efqd)
> {
> - if (list_empty(&io_root_cgroup.css.cgroup->children))
> + if (list_empty(&io_root_cgroup.css.cgroup->children) &&
> + efqd->busy_queues == 1 && efqd->root_group->ioq)
> return 1;
>
> return 0;
> @@ -2383,7 +2384,7 @@ static void io_free_root_group(struct elevator_queue *e)
> int elv_iog_should_idle(struct io_queue *ioq) { return 0; }
> EXPORT_SYMBOL(elv_iog_should_idle);
>
> -static inline int is_only_root_group(void)
> +static inline int is_only_root_group(struct elv_fq_data *efqd)
> {
> return 1;
> }
> @@ -2547,7 +2548,7 @@ elv_iosched_expire_ioq(struct request_queue *q, int slice_expired, int force)
> struct elevator_queue *e = q->elevator;
> struct io_queue *ioq = elv_active_ioq(q->elevator);
> int ret = 1;
> -
> +
> if (e->ops->elevator_expire_ioq_fn) {
> ret = e->ops->elevator_expire_ioq_fn(q, ioq->sched_queue,
> slice_expired, force);
> @@ -2969,7 +2970,7 @@ void *elv_select_ioq(struct request_queue *q, int force)
> * single queue ioschedulers (noop, deadline, AS).
> */
>
> - if (is_only_root_group() && elv_iosched_single_ioq(q->elevator))
> + if (is_only_root_group(efqd) && elv_iosched_single_ioq(q->elevator))
> goto keep_queue;
>
> /* We are waiting for this group to become busy before it expires.*/
> @@ -3180,7 +3181,7 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
> * unnecessary overhead.
> */
>
> - if (is_only_root_group() &&
> + if (is_only_root_group(ioq->efqd) &&
> elv_iosched_single_ioq(q->elevator)) {
> elv_log_ioq(efqd, ioq, "select: only root group,"
> " no expiry");
> --
> 1.5.4.rc3
>
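[Editor's note: for readers following the fix, here is a minimal annotated sketch of the
patched helper. It is an illustration only: the field names (busy_queues, root_group, ioq)
come from the diff above, but the struct declarations and the root_cgroup_has_no_children()
stand-in are paraphrased assumptions, not the actual elevator-fq.c definitions.]

	/* Paraphrased context (assumption): only the fields the check reads. */
	struct io_queue;			/* opaque here */

	struct io_group {
		struct io_queue *ioq;		/* queue owned by this group, if any */
	};

	struct elv_fq_data {
		int busy_queues;		/* number of ioqs with queued requests */
		struct io_group *root_group;
	};

	/* Stand-in for list_empty(&io_root_cgroup.css.cgroup->children). */
	extern int root_cgroup_has_no_children(void);

	/*
	 * After the fix, "only the root group" also requires that exactly one
	 * queue is busy and that the root group actually owns a queue. This
	 * rules out the stale case where a removed child cgroup's ioq is
	 * still under service: there, root_group->ioq is not the busy queue,
	 * so the optimization no longer applies and the ioq can expire.
	 */
	static inline int is_only_root_group(struct elv_fq_data *efqd)
	{
		if (root_cgroup_has_no_children() &&
		    efqd->busy_queues == 1 && efqd->root_group->ioq)
			return 1;

		return 0;
	}

The last condition is the crucial one: checking the cgroup children list alone cannot tell
whether the queue currently under service belongs to the root group or to a group that has
already been torn down.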