Message-ID: <20080417102109.GZ12774@kernel.dk>
Date: Thu, 17 Apr 2008 12:21:10 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Aaron Carroll <aaronc@....unsw.edu.au>
Cc: Paolo Valente <paolo.valente@...more.it>,
Pavel Machek <pavel@....cz>, linux-kernel@...r.kernel.org
Subject: Re: [RESEND][RFC] BFQ I/O Scheduler
On Thu, Apr 17 2008, Aaron Carroll wrote:
> Jens Axboe wrote:
> >>Maybe there is also another middle-ground solution. I'll try to sketch
> >>it out:
> >>. use sectors instead of time
> >>. impose a penalty on each thread in proportion to the distance
> >>between its disk requests
> >>. reduce the maximum budget of each thread as a function of this seek
> >>penalty so as to prevent the thread from stealing more than a given time
> >>slice (the simple mechanism to limit per-thread budget is already
> >>implemented in bfq).
> >>
> >>By doing so, both fairness and time isolation should be guaranteed.
> >>Finally, this policy should be safe in that, given the maximum time used
> >>by a seeky thread to consume its maximum budget on a reference disk, the
> >>time used on any faster disk should be shorter.
> >>
> >>Does it seem reasonable?
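
As a concrete reading of the proposal above, here is a minimal
user-space sketch of the seek-penalty / budget-scaling idea. All names
and constants (bfq_thread, BFQ_MAX_BUDGET, BFQ_SEEK_THRESH, the 7/8
decay weight) are illustrative assumptions for this example, not taken
from the actual bfq patch:

    /*
     * Illustrative sketch only: struct layout, constants and the 7/8
     * decay weight are assumptions, not the real bfq code.
     */
    #include <stdio.h>

    #define BFQ_MAX_BUDGET  16384UL  /* assumed max sectors per slot */
    #define BFQ_SEEK_THRESH  1024UL  /* assumed "seeky" distance */

    struct bfq_thread {
            unsigned long long last_sector; /* end of previous request */
            unsigned long seek_penalty;     /* decaying avg seek distance */
            unsigned long max_budget;       /* sectors thread may consume */
    };

    /* Track how seeky a thread is: decaying average of request distance. */
    static void update_seek_penalty(struct bfq_thread *t,
                                    unsigned long long sector)
    {
            unsigned long long dist = sector > t->last_sector ?
                    sector - t->last_sector : t->last_sector - sector;

            t->seek_penalty = (t->seek_penalty * 7 + dist) / 8;
            t->last_sector = sector;
    }

    /*
     * Shrink the budget in proportion to the seek penalty, so that a
     * seeky thread cannot steal more than its nominal time slice.
     */
    static void scale_budget(struct bfq_thread *t)
    {
            if (t->seek_penalty <= BFQ_SEEK_THRESH)
                    t->max_budget = BFQ_MAX_BUDGET;
            else
                    t->max_budget = BFQ_MAX_BUDGET * BFQ_SEEK_THRESH /
                            t->seek_penalty;
    }

    int main(void)
    {
            struct bfq_thread t = { .last_sector = 0 };

            update_seek_penalty(&t, 500000);   /* one long seek */
            scale_budget(&t);
            printf("budget after long seek: %lu sectors\n", t.max_budget);
            return 0;
    }
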
> >
> >Not for CFQ, which will stay time-based. The problem with #2 above is
> >that it quickly turns into various heuristics, which are just
> >impossible to tune for general behaviour, or which simply fall apart
> >in other real-life situations.
>
> Like SSDs or hardware RAID. Time slices have the nice property of
> providing fairness irrespective of the underlying hardware
> characteristics.
Exactly. We can cater to that somewhat by adding some simple hardware
profiles, so that the IO schedulers know whether seeks etc. are costly
or not. But it's a good example nonetheless.
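
Such a hardware profile might look something like the sketch below; the
struct, its field names, and the 1000us threshold are hypothetical, not
an existing kernel interface:

    /* Hypothetical per-device profile; not an existing kernel structure. */
    struct hw_profile {
            unsigned int seek_cost_us;  /* avg cost of a random seek, in us */
            unsigned int rotational:1;  /* 0 for SSDs and many RAID setups */
    };

    /* Apply seek-distance heuristics only where seeks actually cost time. */
    static int seeks_are_costly(const struct hw_profile *p)
    {
            return p->rotational && p->seek_cost_us >= 1000;
    }

An SSD or hardware-RAID profile would report rotational == 0, so a
scheduler consulting it could skip seek accounting entirely.
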
--
Jens Axboe