Message-Id: <1227775382.7443.43.camel@sebastian.kern.oss.ntt.co.jp>
Date: Thu, 27 Nov 2008 17:43:02 +0900
From: Fernando Luis Vázquez Cao
<fernando@....ntt.co.jp>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Ryo Tsuruta <ryov@...inux.co.jp>, linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org,
virtualization@...ts.linux-foundation.org, jens.axboe@...cle.com,
taka@...inux.co.jp, righi.andrea@...il.com, s-uchida@...jp.nec.com,
balbir@...ux.vnet.ibm.com, akpm@...ux-foundation.org,
menage@...gle.com, ngupta@...gle.com, riel@...hat.com,
jmoyer@...hat.com, peterz@...radead.org, fchecconi@...il.com,
paolo.valente@...more.it
Subject: Re: [patch 0/4] [RFC] Another proportional weight IO controller
On Wed, 2008-11-26 at 11:08 -0500, Vivek Goyal wrote:
> > > > > What do you think about the solution at IO scheduler level (like BFQ) or
> > > > > may be little above that where one can try some code sharing among IO
> > > > > schedulers?
> > > >
> > > > > I would like to support any type of block device, even if I/Os issued
> > > > > to the underlying device don't go through an IO scheduler. Dm-ioband
> > > > > can be used for devices such as the loop device.
> > > >
> > >
> > > What do you mean by IO issued to the underlying device not going
> > > through an IO scheduler? A loop device is associated with a file, and
> > > IO will ultimately go to the IO scheduler serving those file
> > > blocks, won't it?
> >
> > How about if the file is on an NFS-mounted file system?
> >
>
> Interesting. So on the surface it looks like contention for the disk, but
> it is really contention for the network, and for the disk on the NFS server.
>
> True that leaf-node IO control will not help here, as the IO never reaches
> a leaf node. We could make the situation better by doing resource control
> on network IO, though.
On the client side NFS does not go through the block layer, so no control
is possible there. As Vivek pointed out, this could be tackled at the
network layer. That said, I guess we could make do with a solution that
controls just the number of dirty pages (this would work for NFS writes,
since the NFS superblock has a backing_dev_info structure associated
with it).
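
To illustrate the dirty-page idea, a per-cgroup cap checked against the
per-bdi accounting might look like the sketch below. This is hypothetical
code, not an existing kernel interface: cgroup_dirty_pages(),
cgroup_dirty_limit() and struct io_cgroup are made-up names standing in for
whatever accounting a controller would add; congestion_wait() is the real
kernel primitive a throttled writer could block in.

```c
/*
 * Hypothetical sketch only: throttle a cgroup's NFS writers by capping
 * the number of dirty pages it may accumulate against a given
 * backing_dev_info. The cgroup_* helpers and struct io_cgroup do not
 * exist in the kernel; they illustrate the shape of the idea.
 */
static void throttle_dirty_pages(struct backing_dev_info *bdi,
				 struct io_cgroup *iocg)
{
	while (cgroup_dirty_pages(iocg, bdi) > cgroup_dirty_limit(iocg)) {
		/* Block the writer until writeback drains some of its
		 * pages, much as balance_dirty_pages() does globally. */
		congestion_wait(WRITE, HZ / 10);
	}
}
```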
> > > What's the use case for doing IO control at the loop device?
> > > Ultimately the resource contention will take place on the actual
> > > underlying physical device where the file blocks are. Will doing the
> > > resource control there not solve the issue for you?
> >
> > I can't come up with a concrete use case, but I would like to make the
> > resource controller more flexible. Actually, a certain block device
> > that I'm using does not use the I/O scheduler.
>
> Isn't it equivalent to using No-op? If yes, then it should not be an
> issue?
No, it is not equivalent. When using device drivers that provide their
own make_request_fn() (check for drivers that invoke
blk_queue_make_request() at initialization time), bios entering the block
layer go directly to the device driver and from there to the device,
bypassing the IO scheduler entirely.
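
For reference, a minimal sketch of such a bio-based driver against the
2.6.27-era block API. blk_alloc_queue(), blk_queue_make_request() and
bio_endio() are the real interfaces of that era; my_make_request() is an
illustrative name, and the immediate completion is faked to keep the
sketch short.

```c
/*
 * Sketch of a bio-based driver (2.6.27-era API): bios handed to
 * my_make_request() never pass through an elevator/IO scheduler.
 */
static int my_make_request(struct request_queue *q, struct bio *bio)
{
	/* Handle the bio directly: a real driver would remap and
	 * resubmit it, or drive the hardware. Here we just pretend
	 * it completed successfully. */
	bio_endio(bio, 0);
	return 0;
}

static int __init my_init(void)
{
	struct request_queue *q = blk_alloc_queue(GFP_KERNEL);

	if (!q)
		return -ENOMEM;
	/* Install our own entry point instead of the default
	 * __make_request(), so the IO scheduler is never involved. */
	blk_queue_make_request(q, my_make_request);
	return 0;
}
```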
Regards,
Fernando