Message-ID: <20120115224532.GD3174@redhat.com>
Date: Sun, 15 Jan 2012 17:45:32 -0500
From: Vivek Goyal <vgoyal@...hat.com>
To: Shaohua Li <shaohua.li@...el.com>
Cc: Dave Chinner <david@...morbit.com>, linux-kernel@...r.kernel.org,
axboe@...nel.dk, jmoyer@...hat.com
Subject: Re: [RFC 0/3]block: An IOPS based ioscheduler
On Mon, Jan 09, 2012 at 09:09:35AM +0800, Shaohua Li wrote:
[..]
> > You need to present raw numbers and give us some idea of how close
> > those numbers are to raw hardware capability for us to have any idea
> > what improvements these numbers actually demonstrate.
> Yes, your guess is right. The hardware has limitation. 12 SSD exceeds
> the jbod capability, for both throughput and IOPS, that's why only
> read/write mixed workload impacts. I'll use less SSD in later tests,
> which will demonstrate the performance better. I'll report both raw
> numbers and fiops/cfq numbers later.
If the FIOPS numbers are better, please explain why they are better.
If you cut down on idling, it is obvious that you will get higher
throughput on these flash devices. CFQ already disables queue idling for
non-rotational NCQ devices. If the higher throughput comes from driving
deeper queue depths, then CFQ can do that too, just by changing the
quantum and disabling idling.
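For reference, that CFQ tuning can be done from userspace through sysfs.
A sketch, assuming a device named sda (quantum and slice_idle are CFQ's
actual tunables; the values here are only illustrative):

```shell
# Select the CFQ scheduler for the device (device name is an assumption).
echo cfq > /sys/block/sda/queue/scheduler

# Disable idling between requests of a queue.
echo 0 > /sys/block/sda/queue/iosched/slice_idle

# Raise the quantum so CFQ dispatches more requests per queue at a time,
# driving deeper queue depths on the device.
echo 32 > /sys/block/sda/queue/iosched/quantum
```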
So I really don't understand what the FIOPS ioscheduler is doing that is
fundamentally different.
The only thing I can think of is more accurate per-queue accounting in
terms of number of IOs instead of time. That can serve to improve
fairness a bit for certain workloads, but in practice I think it might
not matter much.
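To make the accounting distinction concrete, here is a toy sketch (plain
Python, not kernel code; the queue names and costs are made up). Two
queues are dispatched round-robin, but queue A's requests occupy the
device three times longer than queue B's. Accounting by IO count calls
this fair; accounting by time does not:

```python
def simulate(num_ios=90, cost_a=3, cost_b=1):
    """Round-robin dispatch from two queues; track both metrics.

    cost_a/cost_b model device time consumed per request (arbitrary
    units), e.g. large sequential reads vs small random reads.
    """
    time_used = {"A": 0, "B": 0}  # time-based accounting (CFQ-style)
    ios_done = {"A": 0, "B": 0}   # IO-count accounting (FIOPS-style)
    for i in range(num_ios):
        q = "A" if i % 2 == 0 else "B"
        time_used[q] += cost_a if q == "A" else cost_b
        ios_done[q] += 1
    return time_used, ios_done

time_used, ios_done = simulate()
print(ios_done)   # {'A': 45, 'B': 45}  -- equal in IOs
print(time_used)  # {'A': 135, 'B': 45} -- unequal in device time
```

Whether equal IO counts or equal device time is the "fair" share is
exactly the policy question the two schedulers answer differently.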
Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/