Message-ID: <AANLkTinEaQtA-Kw-7CedvNyVL8w1ybDnvkjZZ8g8ORaN@mail.gmail.com>
Date: Thu, 20 May 2010 16:01:55 +0200
From: Corrado Zoccolo <czoccolo@...il.com>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: linux kernel mailing list <linux-kernel@...r.kernel.org>,
Jens Axboe <jens.axboe@...cle.com>,
Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH] cfq-iosched: Revert the logic of deep queues
On Thu, May 20, 2010 at 3:18 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Thu, May 20, 2010 at 01:51:49AM +0200, Corrado Zoccolo wrote:
> Hi Corrado,
>
> Deep queues can happen often on high end storage. One case I can think of is
> multiple kvm virt machines running and doing IO using AIO.
>
> I am not too keen on introducing a tunable at this point of time. Reason
> being that somebody having a SATA disk and driving deep queue depths is
> not a very practical thing to do. At the same time we have fixed a theoretical
> problem in the past. If somebody really runs into the issue of a deep queue
> starving other random IO, then we can fix it.
>
> Even if we have to fix it, I think instead of a tunable, a better solution
> would be to expire the deep queue after one round of dispatch (after
> having dispatched "quantum" number of requests from the queue). That way no
> single sync-noidle queue will starve other queues and they will get to
> dispatch IO very nicely without introducing any bottlenecks.
Can you implement this solution in the patch? It seems that this will
solve the performance issue without reintroducing the theoretical
starvation problem.
If we don't mind some more tree operations, the queue could be expired
at every dispatch (if there are other queues in the service tree),
instead of every quantum dispatches, to cycle through all no-idle
queues more quickly and more fairly.
Thanks,
Corrado
>
> Thanks
> Vivek
>