Message-ID: <4D27319A.3060806@kernel.dk>
Date:	Fri, 07 Jan 2011 16:30:34 +0100
From:	Jens Axboe <axboe@...nel.dk>
To:	Yuehai Xu <yuehaixu@...il.com>
CC:	linux-kernel@...r.kernel.org, cmm@...ibm.com, rwheeler@...hat.com,
	vgoyal@...hat.com, czoccolo@...il.com, yhxu@...ne.edu
Subject: Re: Who determines the number of requests that can be served
  simultaneously by a storage device?

On 2011-01-07 14:23, Yuehai Xu wrote:
> On Fri, Jan 7, 2011 at 8:10 AM, Jens Axboe <axboe@...nel.dk> wrote:
>>
>> Please don't top-post, thanks.
> 
> I am really sorry for that.
> 
>>
>> On 2011-01-07 14:00, Yuehai Xu wrote:
>>> I added a tracepoint so that I can get nr_sorted and in_flight[0/1] of
>>> the request_queue when a request completes. I take nr_sorted to be the
>>> number of pending requests and in_flight[0/1] to be the number currently
>>> being served by the storage. Do these two parameters stand for what I
>>> mean?
>>
>> nr_sorted is the number of requests that reside in the IO scheduler.
>> That means requests that are not on the dispatch list yet. in_flight is
>> the number that the driver is currently handling. So I think your
>> understanding is correct.
>>
>> If you look at where you added your trace point, there is already a
>> trace point right there. I would recommend that you use blktrace, and
>> then use btt to parse it. That will give you all sorts of queueing
>> information.
> 
> Yes, but I noticed that the existing tracepoints don't report nr_sorted
> and in_flight[0/1] directly, so I just added a few lines. The result is
> from blktrace, and I use blkparse to analyze it; that should be the same
> as what you said about btt (I don't know that tool, sorry about that).

You don't need those values. btt can just look at dispatch and
completion events to get an exact queue depth number at any point in
time.
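
A rough recipe (just a sketch; the device name and run length below are
placeholders, adjust them for your setup):

  # capture block layer events from the device under test for ~30 seconds
  blktrace -d /dev/sda -w 30 -o mytrace
  # merge the per-CPU trace files into a single binary dump that btt reads
  blkparse -i mytrace -d mytrace.bin
  # btt's summary includes per-device active request / queue depth stats
  btt -i mytrace.bin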

>>> The benchmark I use is postmark, which simulates an email server
>>> workload; over 90% of the requests are small random writes. The storage
>>> is an Intel M SSD. I expected in_flight[0/1] to be much greater than 1,
>>> but the result shows this value is almost always 1 regardless of the
>>> I/O scheduler (CFQ/DEADLINE/NOOP) or filesystem (EXT4/EXT3/BTRFS). Is
>>> this normal?
>>
>> Depends, do you have more requests pending in the IO scheduler? I'm
>> assuming you already verified that NCQ is active and working for your
>> drive.
>>
> 
> Yes, nr_sorted (the number of pending requests) remains around 100, and
> hdparm shows that NCQ is enabled.

I would double check that NCQ really is active, not just supported. For
instance, the controller needs to support it too. If you look at dmesg
from when it detects your drive, it should print the queue depth used.
Or you can check queue_depth in the sysfs scsi_device directory. It
should be 31 (32 in total, but one has to be reserved for error
handling) if NCQ is enabled, or 1 if it isn't.
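
For example, something like this (assuming the disk shows up as sda;
substitute your actual device):

  # libata prints the negotiated depth at probe time, e.g. "NCQ (depth 31/32)"
  dmesg | grep -i ncq
  # 31 here means NCQ is being used, 1 means it isn't
  cat /sys/block/sda/device/queue_depth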

-- 
Jens Axboe
