Message-ID:  <4550F457.5040801@tmr.com>
Date:	Tue, 07 Nov 2006 16:02:15 -0500
From:	Bill Davidsen <davidsen@....com>
To:	linux-kernel@...r.kernel.org
Cc:	linux-kernel@...r.kernel.org
Subject:  Re: async I/O seems to be blocking on 2.6.15

Chen, Kenneth W wrote:
> Jens Axboe wrote on Sunday, November 05, 2006 4:15 AM
>> On Fri, Nov 03 2006, Brent Baccala wrote:
>>> On Fri, 3 Nov 2006, Jens Axboe wrote:
>>>
>>>> Try to time it (visual output of the app is not very telling, and it's
>>>> buffered) and then apply some profiling.
>>> OK, a little more info.  I added gettimeofday() calls after each call
>>> to io_submit(), put the timevals in an array, and after everything was
>>> done computed the difference between each timeval and the program start
>>> time, as well as the deltas.  I got this:
>>>
>>> 0: 0.080s
>>> 1: 0.086s  0.006s
>>> 2: 0.102s  0.016s
>>> 3: 0.111s  0.008s
>>> 4: 0.118s  0.007s
>>> 5: 0.134s  0.015s
>>> 6: 0.141s  0.006s
>>> 7: 0.148s  0.006s
>>> 8: 0.158s  0.009s
>>> 9: 0.164s  0.006s
>>> ...
>>> 96: 1.036s  0.007s
>>> 97: 1.044s  0.007s
>>> 98: 1.147s  0.102s
>>> 99: 1.155s  0.008s
>>>
>>> 98 appears to be an aberration.  Perhaps three of the times on an
>>> average run are around a tenth of a second; all of the others are
>>> pretty steady at 7 or 8 microseconds.  So, it's basically linear in
>>> its time consumption.
>>>
>>> Does 7 microseconds seem a bit excessive for an io_submit (and a
>>> gettimeofday)?
>> I guess you mean milliseconds, not microseconds. 7 milliseconds seems way
>> too long. I repeated your test here, and the 100 submits take 97000
>> microseconds - or 97 milliseconds. So that's a little less than 1
>> msec per io_submit. Still pretty big. You can experiment with oprofile
>> to see where the kernel spends its time in that period.
> 
> 
> I've tried that myself too and see similar results.  One thing to note is
> that the I/O being submitted is pretty big at 1MB, so the vector list inside
> the bio is going to be pretty long and it will take a while to construct.
> Dropping the size of each I/O to something like 4KB will significantly
> reduce the time.  I haven't measured whether the time to submit I/O grows
> linearly with respect to I/O size.  Most likely it does.  If it doesn't,
> then we might have a scaling problem (though I don't believe we have one).
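
For reference, the harness Brent describes boils down to something like the
sketch below. This is my reconstruction, not his actual code; it assumes
libaio and O_DIRECT reads, and IOSZ is the knob Ken is talking about (1MB
vs. 4KB). Build with gcc -laio; the input file needs at least NREQ*IOSZ bytes.

/* Sketch only: not Brent's actual test program. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#define NREQ 100
#define IOSZ (1024 * 1024)      /* try 4096 to see the effect Ken describes */

int main(int argc, char **argv)
{
        io_context_t ctx = 0;
        struct iocb cb[NREQ], *cbp[1];
        struct timeval start, tv[NREQ];
        void *buf;
        int fd, i;

        if (argc < 2)
                exit(1);
        fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0 || io_setup(NREQ, &ctx) < 0)
                exit(1);
        if (posix_memalign(&buf, 4096, IOSZ))
                exit(1);

        gettimeofday(&start, NULL);
        for (i = 0; i < NREQ; i++) {
                /* all requests read into the same buffer; only the
                 * submit time matters here, not the data */
                io_prep_pread(&cb[i], fd, buf, IOSZ, (long long)i * IOSZ);
                cbp[0] = &cb[i];
                if (io_submit(ctx, 1, cbp) != 1)
                        exit(1);
                gettimeofday(&tv[i], NULL);     /* timestamp after each submit */
        }

        for (i = 0; i < NREQ; i++) {
                double t = (tv[i].tv_sec - start.tv_sec) +
                           (tv[i].tv_usec - start.tv_usec) / 1e6;
                double d = i ? (tv[i].tv_sec - tv[i-1].tv_sec) +
                               (tv[i].tv_usec - tv[i-1].tv_usec) / 1e6 : t;
                printf("%d: %.3fs  %.3fs\n", i, t, d);
        }
        io_destroy(ctx);
        return 0;
}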

I'm probably missing something, but submitting smaller I/O would mean 
more system calls, and presumably more total overhead. I assume they 
would be faster system calls, but if the kernel is sorting and merging 
requests, even that might not be true. Having user space break the I/O 
into pieces and kernel space put them back together again isn't an 
obvious win in overhead, or a solution to the blocking.
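
To make the split concrete, here is a rough sketch (a hypothetical helper,
not anything from the thread) of what user space ends up doing when it chops
one 1MB read into 4KB pieces. Whether the pieces go down batched in one
io_submit() or one call each, the chopping here and the re-merging in the
block layer don't go away.

/* Hypothetical illustration only: one TOTAL-sized read chopped into
 * NCHUNKS small iocbs before submission. */
#include <libaio.h>

#define CHUNK   4096
#define TOTAL   (1024 * 1024)
#define NCHUNKS (TOTAL / CHUNK)

/* Returns what io_submit() returns: number of iocbs queued, or -errno. */
static int submit_split(io_context_t ctx, int fd, void *buf, long long off)
{
        struct iocb cbs[NCHUNKS];
        struct iocb *cbps[NCHUNKS];
        int i;

        for (i = 0; i < NCHUNKS; i++) {
                io_prep_pread(&cbs[i], fd, (char *)buf + i * CHUNK,
                              CHUNK, off + (long long)i * CHUNK);
                cbps[i] = &cbs[i];
        }
        /* one system call carrying 256 requests; looping with
         * io_submit(ctx, 1, ...) would turn it into 256 calls */
        return io_submit(ctx, NCHUNKS, cbps);
}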

That said, I admit that I rarely use AIO, since the problems I have 
where it would be useful are threaded already, and I can painlessly do 
the I/O in a thread and let it block.
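
For what it's worth, the thread-based version I mean is about this much
code; a minimal sketch, assuming plain pthreads and pread():

#include <pthread.h>
#include <unistd.h>

struct req {
        int     fd;
        void   *buf;
        size_t  len;
        off_t   off;
        ssize_t ret;
};

/* Worker: the pread() blocks here instead of in the main thread. */
static void *io_worker(void *arg)
{
        struct req *r = arg;

        r->ret = pread(r->fd, r->buf, r->len, r->off);
        return r;
}

/* Caller does roughly:
 *     pthread_t tid;
 *     pthread_create(&tid, NULL, io_worker, &r);
 *     ... useful work ...
 *     pthread_join(tid, NULL);     and r.ret now holds the result
 */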

Perhaps the use of -EAGAIN could solve this, as Ken mentioned, I think.
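
Purely a guess at what that would look like from the application side
(nothing like this is promised by the current interface): if io_submit()
returned -EAGAIN instead of blocking when it can't queue the request, the
caller could reap a few completions and retry.

#include <libaio.h>
#include <errno.h>

/* Sketch only: retry one submission whenever the hypothetical
 * non-blocking io_submit() says -EAGAIN. */
static int submit_retry(io_context_t ctx, struct iocb **cbp)
{
        struct io_event ev[16];
        int ret;

        for (;;) {
                ret = io_submit(ctx, 1, cbp);
                if (ret != -EAGAIN)
                        return ret;     /* submitted (1) or a real error */
                /* wait for some outstanding I/O to finish, then try again;
                 * a real program would hand these events to its completion
                 * handling rather than drop them */
                io_getevents(ctx, 1, 16, ev, NULL);
        }
}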

-- 
Bill Davidsen <davidsen@....com>
   Obscure bug of 2004: BASH BUFFER OVERFLOW - if bash is being run by a
normal user and is setuid root, with the "vi" line edit mode selected,
and the character set is "big5," an off-by-one error occurs during
wildcard (glob) expansion.

