Message-ID: <4e5e476b0911180820y5d99a81et6be7f6f94442d0d5@mail.gmail.com>
Date: Wed, 18 Nov 2009 17:20:12 +0100
From: Corrado Zoccolo <czoccolo@...il.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: "Alan D. Brunelle" <Alan.Brunelle@...com>,
linux-kernel@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [RFC] Block IO Controller V2 - some results
Hi Vivek,
On Wed, Nov 18, 2009 at 4:32 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> o Currently we wait on the sync-noidle service tree so that sync-noidle
> workloads do not get swamped by sync-idle or async workloads. Skip this
> idling if there are no sync-idle or async queues in the group, there are
> other groups to dispatch requests from, and the user has decided not to
> wait on slow groups in order to achieve better throughput (group_idle=0).
>
> This will make sure that if some group is doing just random IO and does
> not have sufficient IO to keep the disk busy, we will move on to other
> groups to dispatch requests from, and utilize the storage better.
>
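If I read the description correctly, the dispatch-time check amounts to
something like the following (the field names here are made up for
illustration, they are not the actual cfq-iosched.c ones):

static bool cfq_group_should_wait(struct cfq_data *cfqd,
				  struct cfq_group *cfqg)
{
	/* the group also has sync-idle or async queues:
	 * the normal idling rules apply */
	if (cfqg->nr_sync_idle || cfqg->nr_async)
		return true;

	/* no other group has pending requests: idling loses nothing */
	if (cfqd->busy_groups <= 1)
		return true;

	/* otherwise honor the tunable: 0 means favor throughput */
	return cfqd->group_idle != 0;
}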
Such a group will be treated unfairly if the other groups are doing
sequential I/O: since each sequential group consumes a full ~100ms time
slice before the random-I/O group is served again, that group will
dispatch one request every 100ms at best, and every 300ms at worst.
I can't see how this is any better than having a centralized service
tree for all sync-noidle queues.
Probably it is better to just say (rough sketch below):
* if the user wants isolation (group_idle should be named
group_isolation), the no-idle queues go into the group's no-idle tree,
and proper idling on that tree is ensured;
* if the user wants performance rather than isolation, the no-idle
queues go into the root group's no-idle tree, for which idling at the
end of the tree should still be ensured. This doesn't affect the
sync-idle queues, for which group weighting keeps working as before.
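In pseudo-code, the queue placement I am proposing would be roughly the
following (group_isolation, cfq_cfqq_sync_noidle() and the other names
are invented here just to make the idea concrete, not the actual
cfq-iosched.c API):

static struct cfq_group *cfq_select_group(struct cfq_data *cfqd,
					  struct cfq_queue *cfqq)
{
	/* sync-idle queues always stay in their own group, so the
	 * per-group weighting of sequential I/O is unaffected */
	if (!cfq_cfqq_sync_noidle(cfqq))
		return cfqq->cfqg;

	/* isolation: keep no-idle queues on the group's own tree,
	 * and idle on that tree */
	if (cfqd->group_isolation)
		return cfqq->cfqg;

	/* performance: all no-idle queues share the root group's
	 * tree, and we idle only at the end of that tree */
	return cfqd->root_group;
}

Either way the no-idle queues always end up on a tree that gets
end-of-tree idling, so they cannot be starved as in the scenario above.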
Corrado