Message-ID: <D1D4C3FF75F9354393DB8314DF43DEF2E7F7ED@xbl3.emulex.com>
Date: Tue, 12 Aug 2008 11:03:24 -0400
From: James.Smart@...lex.Com
To: <fernando@....ntt.co.jp>, <righi.andrea@...il.com>
Cc: <xen-devel@...ts.xensource.com>,
<containers@...ts.linux-foundation.org>,
<linux-kernel@...r.kernel.org>,
<virtualization@...ts.linux-foundation.org>, <taka@...inux.co.jp>,
<dm-devel@...hat.com>, <agk@...rceware.org>,
<baramsori72@...il.com>, <dave@...ux.vnet.ibm.com>,
<ngupta@...gle.com>, <balbir@...ux.vnet.ibm.com>
Subject: RE: RFC: I/O bandwidth controller
Fernando Luis Vázquez Cao wrote:
> > BTW as I said in a previous email, an interesting path to be
> > explored IMHO could be to think in terms of IO time. So, look at
> > the time an IO request is issued to the drive, look at the time
> > the request is served, evaluate the difference and charge the
> > consumed IO time to the appropriate cgroup. Then dispatch IO
> > requests in function of the consumed IO time debts / credits,
> > using for example a token-bucket strategy. And probably the best
> > place to implement the IO time accounting is the elevator.
> Please note that the seek time for a specific IO request is strongly
> correlated with the IO requests that preceded it, which means that
> the owner of that request is not the only one to blame if it takes
> too long to process it. In other words, with the algorithm you
> propose we may end up charging the wrong guy.
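
For reference, a minimal user-space sketch of the quoted idea, i.e.
charging each cgroup the service time of its requests and gating
dispatch on a token bucket. All names, numbers and hooks here are
hypothetical illustrations, not actual elevator or cgroup code:

/*
 * Hypothetical sketch: charge a cgroup the wall-clock time its
 * requests spend at the drive, and only dispatch new requests while
 * the cgroup still has "io time" credit in its token bucket.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct io_cgroup {
	const char *name;
	int64_t tokens_ns;	/* accumulated credit, may go negative */
	int64_t rate_ns;	/* io-time credit granted per refill tick */
	int64_t bucket_ns;	/* cap on accumulated credit */
};

/* Called once per refill period (e.g. from a periodic timer). */
static void refill(struct io_cgroup *cg)
{
	cg->tokens_ns += cg->rate_ns;
	if (cg->tokens_ns > cg->bucket_ns)
		cg->tokens_ns = cg->bucket_ns;
}

/* Dispatch policy: only issue a request while the cgroup is in credit. */
static bool may_dispatch(const struct io_cgroup *cg)
{
	return cg->tokens_ns > 0;
}

/*
 * Accounting: on completion, charge the elapsed service time
 * (completion time minus issue time) to the owning cgroup.
 */
static void charge(struct io_cgroup *cg, int64_t issue_ns, int64_t complete_ns)
{
	cg->tokens_ns -= complete_ns - issue_ns;
}

int main(void)
{
	struct io_cgroup cg = { "demo", 0, 10 * 1000 * 1000, 50 * 1000 * 1000 };

	refill(&cg);				/* grant 10ms of io time */
	printf("may dispatch: %d\n", may_dispatch(&cg));
	charge(&cg, 0, 12 * 1000 * 1000);	/* request took 12ms */
	printf("may dispatch: %d\n", may_dispatch(&cg));	/* now in debt */
	return 0;
}
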
I assume all of these discussions are focused on simple storage - disks
direct-attached to a single server - and are not targeted at SANs with
arrays, multi-initiator accesses, and fabric/network impacts. True?
Such algorithms can be seriously off-base in these latter configurations.
-- james s