Message-Id: <200607252127.09402.a1426z@gawab.com>
Date: Tue, 25 Jul 2006 21:27:09 +0300
From: Al Boldi <a1426z@...ab.com>
To: Arjan van de Ven <arjan@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: CFQ will be the new default IO scheduler - why?
Arjan van de Ven wrote:
> Al Boldi wrote:
> > Arjan van de Ven wrote:
> >>>> Should there be a default scheduler per filesystem, as some
> >>>> filesystems may perform better or worse with one than with another?
> >>>
> >>> It's currently perDevice, and should probably be extended to perMount.
> >>
> >> Hi,
> >
> > Hi!
> >
> >> per mount is going to be "not funny". I assume the situation you are
> >> aiming for is "3 partitions on a disk, each wants its own elevator".
> >> The way the kernel currently works is that the IO requests the
> >> filesystem issues are first flattened into IO for the entire device
> >> (i.e. the partition mapping is applied) and THEN the IO scheduler gets
> >> involved to schedule the IO on a per-disk basis.
> >
> > IC. That probably explains why concurrent io-procs have such a hard
> > time getting through to the disk. They probably just hang in the
> > flattening phase, waiting for something to take care of their requests.
>
> flattening is just an addition in the cpu; that's trivially cheap and
> shouldn't be visible anywhere performance-wise.
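(For reference, that per-device selection is exposed through sysfs on 2.6,
one elevator per disk queue, nothing per partition or per mount; hda below
is just my disk, adjust as needed:)
# cat /sys/block/hda/queue/scheduler              # lists the elevators, active one in brackets
# echo deadline > /sys/block/hda/queue/scheduler  # switch this disk's elevator at runtime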
Try this on both 2.6 and 2.4:
# cat /dev/hda > /dev/null
< switch to another vt >
< login >
< start timing >
< wait for shell >
< stop timing >
< wait for the dcache to be pushed out by the cat, then repeat the login as necessary >
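Roughly the same thing scripted (just a sketch; "bash --login -c true" only
approximates a real VT login, and bash plus its startup files may still be
cached unless the cat has run long enough to push them out):
# cat /dev/hda > /dev/null &     # sequential reader hogging the disk
# time bash --login -c true      # rough stand-in for timing a fresh login
# kill %1                        # stop the background reader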
On my system 2.4.31 (about 2 sec) is at least twice as fast as 2.6.17
(4-10 sec), depending on the io-scheduler: noop/deadline perform best,
albeit with a lot of disk noise (the head scrubbing the disk), while
anticipatory/cfq perform worst, albeit more quietly (the disk just sits
there).
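(For anyone reproducing this: besides the sysfs knob above, the boot-time
default elevator can be picked on the kernel command line, e.g.:)
elevator=deadline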
Thanks!
--
Al