Message-ID: <4D2710DC.9050908@kernel.dk>
Date: Fri, 07 Jan 2011 14:10:52 +0100
From: Jens Axboe <axboe@...nel.dk>
To: Yuehai Xu <yuehaixu@...il.com>
CC: linux-kernel@...r.kernel.org, cmm@...ibm.com, rwheeler@...hat.com,
vgoyal@...hat.com, czoccolo@...il.com, yhxu@...ne.edu
Subject: Re: Who does determine the number of requests that can be serving
simultaneously in a storage?
Please don't top-post, thanks.
On 2011-01-07 14:00, Yuehai Xu wrote:
> I added a tracepoint so that I can get nr_sorted and in_flight[0/1] of
> the request_queue when a request is completed. I consider nr_sorted to
> be the number of pending requests and in_flight[0/1] to be the number
> being served by the storage. Do these two parameters stand for what I
> mean?
nr_sorted is the number of requests that reside in the IO scheduler,
i.e. requests that are not on the dispatch list yet. in_flight is the
number of requests that the driver is currently handling. So I think
your understanding is correct.
If you look at where you added your trace point, there is already a
trace point right there. I would recommend that you use blktrace, and
then use btt to parse it. That will give you all sorts of queueing
information.
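A minimal run, assuming the device under test is /dev/sdb and the usual
blktrace/blkparse/btt tools are installed, would look something like:

  # blktrace -d /dev/sdb -o postmark &
  ... run the benchmark ...
  # kill %1
  # blkparse -i postmark -d postmark.bin
  # btt -i postmark.bin

btt's summary then includes the latency breakdown (Q2D, D2C, Q2C), and
with -Q it can also dump the active queue depth over time.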
> The benchmark I use is postmark, which simulates an email server
> workload; over 90% of the requests are small random writes. The storage
> is an Intel M SSD. Generally, I would expect in_flight[0/1] to be much
> greater than 1, but the result shows that this value is almost always 1
> no matter which I/O scheduler (CFQ/DEADLINE/NOOP) or filesystem
> (EXT4/EXT3/BTRFS) I use. Is this normal?
Depends, do you have more requests pending in the IO scheduler? I'm
assuming you already verified that NCQ is active and working for your
drive.
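(Assuming the drive shows up as sdb, something like

  $ cat /sys/block/sdb/device/queue_depth
  $ dmesg | grep -i ncq

should tell you: a queue_depth of 31 and an "NCQ (depth 31/32)" line in
dmesg mean NCQ is enabled, while a queue_depth of 1 would by itself
explain why in_flight never goes above 1.)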
--
Jens Axboe