Message-ID: <1325746241.22361.503.camel@sli10-conroe>
Date: Thu, 05 Jan 2012 14:50:41 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: Dave Chinner <david@...morbit.com>
Cc: linux-kernel@...r.kernel.org, axboe@...nel.dk, vgoyal@...hat.com,
jmoyer@...hat.com
Subject: Re: [RFC 0/3]block: An IOPS based ioscheduler
On Wed, 2012-01-04 at 18:19 +1100, Dave Chinner wrote:
> On Wed, Jan 04, 2012 at 02:53:37PM +0800, Shaohua Li wrote:
> > An IOPS based I/O scheduler
> >
> > Flash-based storage has several characteristics that differ from rotating disks:
> > 1. No I/O seek.
> > 2. Read and write I/O costs usually differ significantly.
> > 3. The time a request takes depends on the request size.
> > 4. High throughput and IOPS, low latency.
> >
> > The CFQ iosched does well for rotating disks, with fair dispatching and
> > idling for sequential reads, for example. It also has optimizations for
> > flash-based storage (for item 1 above), but overall it's not designed
> > for flash: it's a slice-based algorithm. Since the per-request cost on
> > flash-based storage is very low, and drives with large queue depths are
> > now common (which makes the dispatch cost even lower), CFQ's jiffy-based
> > slice accounting doesn't work well. CFQ also doesn't consider items 2
> > and 3 above.
> >
> > The FIOPS (Fair IOPS) ioscheduler tries to fill these gaps. It's IOPS
> > based, so it only targets drives without I/O seek. It's quite similar
> > to CFQ, but the dispatch decision is made according to IOPS instead of
> > time slices.
> >
> > The algorithm is simple. The drive has a service tree, and each task
> > lives in the tree. The key into the tree is called vios (virtual I/O).
> > Every request has a vios value, which is calculated according to its
> > ioprio, request size and so on. A task's vios is the sum of the vios of
> > all requests it dispatches. FIOPS always selects the task with the
> > minimum vios in the service tree and lets that task dispatch a request.
> > The dispatched request's vios is then added to the task's vios and the
> > task is repositioned in the service tree.
> >
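As a rough illustration of this selection loop, here is a minimal
userspace sketch. The actual patches keep tasks in a kernel service tree
(an rbtree, as in CFQ); the flat task array, the task fields, and the
unit per-request cost below are illustrative assumptions only:

/* Minimal userspace sketch of the FIOPS selection loop described
 * above -- not the kernel code.  A flat array stands in for the
 * service tree, and every request is charged one vios unit. */
#include <stdio.h>

struct task {
	const char *name;
	unsigned long long vios;	/* accumulated virtual I/O cost */
};

/* Hypothetical per-request cost: one base unit.  Scaling by size,
 * direction, or ioprio (patch 3 and the TODO list) would adjust this. */
static unsigned long long request_vios(void)
{
	return 1;
}

/* Pick the task with minimum vios -- this stands in for the
 * service-tree lookup. */
static struct task *fiops_select(struct task *tasks, int n)
{
	struct task *min = &tasks[0];
	int i;

	for (i = 1; i < n; i++)
		if (tasks[i].vios < min->vios)
			min = &tasks[i];
	return min;
}

int main(void)
{
	struct task tasks[] = { { "A", 0 }, { "B", 0 } };
	int i;

	for (i = 0; i < 6; i++) {
		struct task *t = fiops_select(tasks, 2);

		printf("dispatch from %s (vios=%llu)\n", t->name, t->vios);
		/* Charge the dispatched request's vios back to the task,
		 * which repositions it in the (here implicit) tree. */
		t->vios += request_vios();
	}
	return 0;
}

With equal per-request cost the two tasks simply alternate; once
per-request vios differs (by size, direction, or ioprio), tasks issuing
cheaper requests get proportionally more dispatches.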
> > The series is organized as follows:
> > Patch 1: separate out CFQ's io context management code; FIOPS will use it too.
> > Patch 2: the core FIOPS.
> > Patch 3: request read/write vios scaling. This demonstrates how the vios
> > scales (a sketch follows below).
> >
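To make the read/write scaling idea of patch 3 concrete, here is a
small sketch. The fixed-point scheme, the constant names, and the 3x
write scale are assumptions chosen for illustration, not the values the
patch actually uses:

/* Sketch of read/write vios scaling: writes are charged more vios
 * than reads, on the theory that writes are costlier on flash.
 * All constants below are assumed, not taken from the patch. */
#include <stdio.h>

#define VIOS_SCALE_SHIFT	10
#define VIOS_READ_SCALE		(1 << VIOS_SCALE_SHIFT)		/* 1.0x, assumed */
#define VIOS_WRITE_SCALE	((1 << VIOS_SCALE_SHIFT) * 3)	/* 3.0x, assumed */

/* Scale a request's base vios by direction using fixed-point math. */
static unsigned long long scaled_vios(unsigned long long base, int is_write)
{
	unsigned long long scale = is_write ? VIOS_WRITE_SCALE
					    : VIOS_READ_SCALE;

	return (base * scale) >> VIOS_SCALE_SHIFT;
}

int main(void)
{
	printf("read:  %llu\n", scaled_vios(1, 0));	/* 1 */
	printf("write: %llu\n", scaled_vios(1, 1));	/* 3 */
	return 0;
}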
> > To keep the code simple for easy review, some scaling code isn't
> > included here, and some is not implemented yet.
> >
> > TODO:
> > 1. ioprio support (patch already exists)
> > 2. request size vios scaling
> > 3. cgroup support
> > 4. tracing support
> > 5. automatically select default iosched according to QUEUE_FLAG_NONROT.
> >
> > Comments and suggestions are welcome!
>
> Benchmark results?
I don't have data yet. The patches are still at an early stage; I want
to focus on the basic idea first.
Thanks,
Shaohua
--