Message-Id: <1200136245.7999.20.camel@lappy>
Date: Sat, 12 Jan 2008 12:10:45 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: balbir@...ux.vnet.ibm.com
Cc: Valdis.Kletnieks@...edu, righiandr@...rs.sourceforge.net,
LKML <linux-kernel@...r.kernel.org>,
Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [RFC][PATCH] per-task I/O throttling
On Sat, 2008-01-12 at 16:27 +0530, Balbir Singh wrote:
> * Peter Zijlstra <a.p.zijlstra@...llo.nl> [2008-01-12 10:46:37]:
>
> >
> > On Fri, 2008-01-11 at 23:57 -0500, Valdis.Kletnieks@...edu wrote:
> > > On Fri, 11 Jan 2008 17:32:49 +0100, Andrea Righi said:
> > >
> > > > The interesting feature is that it allows you to set a priority for
> > > > each process container, but AFAIK it doesn't allow you to "partition"
> > > > the bandwidth between different containers (that would be a nice
> > > > feature IMHO). For example, it would be great to be able to define
> > > > per-container limits, like assigning 10MB/s to processes in
> > > > container A, 30MB/s to container B, 20MB/s to container C, etc.
> > >
> > > Has anybody considered allocating based on *seeks* rather than bytes
> > > moved, or counting seeks as "virtual bytes" for the purposes of
> > > accounting (if the disk can do 50 Mbytes/sec, and a seek takes
> > > 5 millisecs, then count it as 100K of data)?
> >
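(A minimal sketch of that seeks-as-virtual-bytes accounting, in
userspace C just to illustrate the arithmetic; the constants and
helper names below are made up for illustration, not taken from any
existing scheduler:)

/*
 * Sketch: charge a seek as the amount of data the disk could have
 * transferred in the same time.  All numbers and names here are
 * illustrative only.
 */
#include <stdio.h>

#define DISK_BW_BPS	(50ULL * 1000 * 1000)	/* ~50 Mbytes/sec     */
#define AVG_SEEK_NS	(5ULL * 1000 * 1000)	/* ~5 ms average seek */

/* virtual bytes per seek = seek time * sequential throughput */
static unsigned long long seek_virtual_bytes(void)
{
	return DISK_BW_BPS * AVG_SEEK_NS / 1000000000ULL;
}

/* total charge for an I/O: real bytes plus virtual bytes for seeks */
static unsigned long long io_charge(unsigned long long bytes, int seeks)
{
	return bytes + (unsigned long long)seeks * seek_virtual_bytes();
}

int main(void)
{
	/* a small random read is dominated by its seek charge */
	printf("4K read + 1 seek charged as %llu bytes\n",
	       io_charge(4096, 1));
	return 0;
}

With those example figures a single seek is charged as a few hundred
KB of sequential transfer, which is the effect described above.
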
> > I was considering a time-based scheduler: you can fill your time slot
> > with seeks or data. It might be what CFQ does, but I've never even
> > read the code.
> >
>
> So far the definition of I/O bandwidth has been w.r.t. time. Not all
> I/O devices have sectors; I'd prefer bytes over a period of time.
Doing a time-based one would only require knowing the (avg) delay of
seeks, whereas doing a bytes-based one would also require knowing the
(avg) speed of the device.

That is, if you're also interested in providing a latency guarantee,
because that would force you to convert bytes back into time again.
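
To make that concrete, a hypothetical per-request cost estimate
(avg_seek_ns and avg_bw_bps are assumed per-device numbers; nothing
here comes from CFQ or any real scheduler):

/*
 * Estimated device time for a request.  A time-based throttle only
 * needs avg_seek_ns; converting bytes back into time, as a bytes-based
 * scheme would have to for latency guarantees, also needs avg_bw_bps.
 */
static unsigned long long request_time_ns(unsigned long long bytes,
					  int seeks,
					  unsigned long long avg_seek_ns,
					  unsigned long long avg_bw_bps)
{
	/* seek time plus transfer time (bytes / bandwidth) */
	return (unsigned long long)seeks * avg_seek_ns +
	       bytes * 1000000000ULL / avg_bw_bps;
}
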
I'm not sure a bytes-based scheme is a good way to go as long as the
majority of devices still have a non-zero seek penalty (SSDs just
aren't there yet for most of us).
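
For completeness, a rough sketch of the time-slot idea quoted above:
give each task a budget of device time per period and admit requests
until the slot is full. The structures and names are invented for
illustration; CFQ may do something entirely different:

struct io_slot {
	unsigned long long budget_ns;	/* device time per period */
	unsigned long long used_ns;	/* time consumed so far   */
};

/* admit a request if its estimated time still fits in the slot */
static int io_slot_admit(struct io_slot *slot, unsigned long long cost_ns)
{
	if (slot->used_ns + cost_ns > slot->budget_ns)
		return 0;	/* throttle until the next period */
	slot->used_ns += cost_ns;
	return 1;
}

/* refill at the start of each period */
static void io_slot_refill(struct io_slot *slot)
{
	slot->used_ns = 0;
}

Whether the cost estimate comes from seeks, bytes, or both, the slot
itself only deals in time, which is the appeal of the approach.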