Message-Id: <1225988173.7803.4723.camel@twins>
Date: Thu, 06 Nov 2008 17:16:13 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org,
virtualization@...ts.linux-foundation.org, jens.axboe@...cle.com,
Hirokazu Takahashi <taka@...inux.co.jp>,
Ryo Tsuruta <ryov@...inux.co.jp>,
Andrea Righi <righi.andrea@...il.com>,
Satoshi UCHIDA <s-uchida@...jp.nec.com>,
fernando@....ntt.co.jp, balbir@...ux.vnet.ibm.com,
Andrew Morton <akpm@...ux-foundation.org>, menage@...gle.com,
ngupta@...gle.com, Rik van Riel <riel@...hat.com>,
Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [patch 0/4] [RFC] Another proportional weight IO controller
On Thu, 2008-11-06 at 11:01 -0500, Vivek Goyal wrote:
> > Does this still require I use dm, or does it also work on regular block
> > devices? Patch 4/4 isn't quite clear on this.
>
> No. You don't have to use dm. It will simply work on regular devices. We
> shall have to put a few lines of code in for it to work on devices which
> don't use the standard __make_request() function and instead provide
> their own make_request function.
>
> Hence, for example, I have put those few lines of code in so that it can
> work with dm devices. I shall have to do something similar for md too.
>
> Though, I am not very sure why I need to do IO control on higher-level
> devices. Would it be sufficient if we just controlled only the
> bottom-most physical block devices?
>
> Anyway, this approach should work at any level.
Nice, although I would think doing only the higher-level devices makes
more sense than doing only the leaves.
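
To make the quoted __make_request() distinction concrete, here is a minimal
sketch of the two registration paths, written against the ~2.6.27 block layer
API as I understand it; my_request_fn(), my_make_request(), my_init() and
my_lock are made-up names for illustration, not existing symbols:

	#include <linux/module.h>
	#include <linux/init.h>
	#include <linux/spinlock.h>
	#include <linux/blkdev.h>
	#include <linux/bio.h>

	static DEFINE_SPINLOCK(my_lock);

	/*
	 * Request-based driver: blk_init_queue() installs __make_request()
	 * as the queue's make_request_fn, so a controller hooked into
	 * __make_request() sees every bio without any driver changes.
	 */
	static void my_request_fn(struct request_queue *q)
	{
		/* pull requests off the queue with elv_next_request() etc. */
	}

	/*
	 * Bio-based driver (the dm/md case): the driver supplies its own
	 * make_request_fn and __make_request() is never called, which is
	 * why such devices need the extra hooks Vivek mentions.
	 */
	static int my_make_request(struct request_queue *q, struct bio *bio)
	{
		/* remap or complete the bio directly */
		bio_endio(bio, 0);
		return 0;
	}

	static int __init my_init(void)
	{
		struct request_queue *rq_q  = blk_init_queue(my_request_fn, &my_lock);
		struct request_queue *bio_q = blk_alloc_queue(GFP_KERNEL);

		if (bio_q)
			blk_queue_make_request(bio_q, my_make_request);
		return (rq_q && bio_q) ? 0 : -ENOMEM;
	}
	module_init(my_init);
	MODULE_LICENSE("GPL");
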
Is there any reason we cannot merge this with the regular io-scheduler
interface? AFAIK the only problem with doing group scheduling in the
io-schedulers is the stacked devices issue.
Could we make the io-schedulers aware of this hierarchy?
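
For what it's worth, the way I picture a hierarchy-aware elevator is one
scheduling node per cgroup per device, with dispatch done as a two-level
pick. Purely a thought-experiment sketch with invented structures (io_group
and its fields do not exist anywhere in the tree), not a proposal for real
code:

	#include <linux/rbtree.h>
	#include <linux/types.h>

	/* hypothetical: one scheduling node per cgroup, per block device */
	struct io_group {
		struct rb_root		queues;		/* per-task queues, keyed by virtual time */
		u64			vdisktime;	/* group virtual time, scaled by weight */
		unsigned int		weight;		/* proportional share from the cgroup */
		struct io_group		*parent;	/* mirrors the cgroup hierarchy */
		struct rb_node		group_node;	/* position in the parent's tree of groups */
	};

	/*
	 * Dispatch would then be a two-level pick: first the group with the
	 * smallest virtual time, then a queue within that group -- roughly
	 * what the proportional-weight controller does, but driven from
	 * inside the elevator so it composes with the existing
	 * io-scheduler interface.
	 */
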