Message-ID: <48692DC0.6060904@gmail.com>
Date:	Mon, 30 Jun 2008 14:02:24 -0500
From:	Roger Heflin <rogerheflin@...il.com>
To:	Martin Sustrik <sustrik@...tmq.com>
CC:	Martin Lucina <mato@...elna.sk>, linux-kernel@...r.kernel.org
Subject: Re: Higher than expected disk write(2) latency

Martin Sustrik wrote:
> Hi Roger,
> 
>>> If these figures are to be believed, then why are we seeing latencies of
>>> 8.3 msec?  Is this normal?  Or are we just being overly optimistic in
>>> our performance expectations?
>>
>> Consider this, 60/7200rpm=8.3ms for one rotation.
>>
>> You write sectors n and n+1: it takes some amount of time for that 
>> first set of sectors to come under the head, and when it does you 
>> write them and return immediately.   Immediately after that you 
>> attempt to write sectors n+2 and n+3, which just a moment ago passed 
>> under the head, so you have to wait an *ENTIRE* revolution, another 
>> ~8.3ms, for those sectors to come back under the head, and you repeat 
>> this for each block written.   If the sector were randomly placed in 
>> the rotation (ie a 50% chance of the disk being off by 1/2 a rotation 
>> or less) you would see a 4.15 ms average rotational delay in your 
>> test, but with sequential sync writes the next sector is left about as 
>> far from the head as possible (it just passed under it).
> 
> Fair enough. That explains the behaviour. Would AIO help here? If we 
> are able to enqueue the next write before the first one is finished, 
> the disk can start writing it immediately without waiting for a 
> revolution.

If you could get them queued at the disk level it might.  The things that 
would need to be watched are whether the disk can queue requests at all 
(and whether the controller/driver supports it), how many requests the 
disk can queue up, and how large each of those requests can be.  If they 
aren't queued at the disk, there is the chance that the machine cannot 
get the data to the disk fast enough to catch that next sector.
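
Something along these lines would be the queued variant with Linux AIO 
(untested sketch, link with -laio; the file name, queue depth and block 
size are made up, it is only to show several writes in flight at once 
instead of write()+sync one block at a time):

#define _GNU_SOURCE             /* for O_DIRECT */
#include <libaio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NREQ  4                 /* writes kept in flight at once */
#define BLKSZ 4096              /* must be aligned for O_DIRECT  */

int main(void)
{
        io_context_t ctx = 0;
        struct iocb iocb[NREQ], *iocbp[NREQ];
        struct io_event ev[NREQ];
        void *buf;
        int fd, i;

        fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        posix_memalign(&buf, BLKSZ, BLKSZ);
        memset(buf, 'x', BLKSZ);

        io_setup(NREQ, &ctx);

        /* submit NREQ sequential writes in one batch */
        for (i = 0; i < NREQ; i++) {
                io_prep_pwrite(&iocb[i], fd, buf, BLKSZ, (long long)i * BLKSZ);
                iocbp[i] = &iocb[i];
        }
        io_submit(ctx, NREQ, iocbp);

        /* wait for all of them; with the requests queued the drive can
         * take the next sector as it comes around instead of losing a
         * whole revolution per block */
        io_getevents(ctx, NREQ, NREQ, ev, NULL);

        io_destroy(ctx);
        close(fd);
        return 0;
}

Whether that actually buys anything still depends on the queue depth the 
drive/controller advertise, as above.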

I have always avoided fully sync operations, as things *ALWAYS* got really 
slow because of everything needed to make sure the data always reached the 
disk correctly across an unexpected crash.  With the type of applications 
I typically dealt with, if the machine crashed the data currently being 
written was known to be incomplete and generally useless anyway, so the 
job was simply rerun.

Depending on your application you could always get a small fast solid 
state device (no seek or RPM issues) and use it to keep a journal that can 
be replayed after an unexpected crash... and then just use occasional 
syncs to force things out to the main disk at suitable points.
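
The pattern would be roughly this (sketch only, paths and record sizes 
made up, no real replay logic): append each record synchronously to the 
journal on the fast device, let the real data go through the page cache, 
and fsync() the data file at checkpoints, after which the journal entries 
are no longer needed.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char rec[] = "one record of output\n";
        int jfd, dfd, i;

        /* hypothetical mount points: small SSD for the journal,
         * big rotating disk for the bulk data */
        jfd = open("/ssd/journal", O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
        dfd = open("/data/datafile", O_WRONLY | O_CREAT | O_APPEND, 0644);

        for (i = 0; i < 1000; i++) {
                /* synchronous journal write: survives a crash */
                write(jfd, rec, sizeof(rec) - 1);

                /* data write is cached: cheap */
                write(dfd, rec, sizeof(rec) - 1);

                /* every so often force the real data out, after which
                 * the journal can be emptied */
                if ((i + 1) % 100 == 0) {
                        fsync(dfd);
                        ftruncate(jfd, 0);
                }
        }

        close(jfd);
        close(dfd);
        return 0;
}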

> 
>>> We also ran the same test on a different system with recent SAS disks
>>> connected via a HP/Compaq CCISS controller.  I don't have the exact
>>> details of the drives used, since I don't know how to get them out of
>>> the cciss driver, but the latencies we got were around 4 msec.  Whilst
>>> this is better than the "commodity" hardware used in the tests above, it
>>> still seems excessive.
>>
>> Almost the same case as for the 7200 rpm disk, but I bet these SAS 
>> drives are 15k drives?   If so 60/15000=4ms.
> 
> Bingo!

Note that in my experience SAS drives deal with concurrency a lot better 
than SATA drives.  One would expect a SAS drive to scale about 2x better 
than a SATA drive from the faster RPM alone, but test results indicate 
they were considerably better than that when hit with more concurrent 
streams.
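
For anyone who wants to reproduce the numbers, a loop like the following 
(untested sketch, file name and sizes arbitrary) is enough: sequential 
O_SYNC writes timed one by one should average near one rotation (~8.3 ms 
on 7200 rpm, ~4 ms on 15k), assuming the drive's write cache isn't 
absorbing the writes.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define BLKSZ   4096
#define NWRITES 100

int main(void)
{
        char buf[BLKSZ];
        struct timeval t0, t1;
        double ms, total = 0.0;
        int fd, i;

        memset(buf, 'x', sizeof(buf));
        fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);

        for (i = 0; i < NWRITES; i++) {
                gettimeofday(&t0, NULL);
                write(fd, buf, sizeof(buf));   /* returns only once on media */
                gettimeofday(&t1, NULL);
                ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                     (t1.tv_usec - t0.tv_usec) / 1000.0;
                total += ms;
                printf("write %3d: %6.2f ms\n", i, ms);
        }
        printf("average: %.2f ms\n", total / NWRITES);

        close(fd);
        return 0;
}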

                              Roger
