Message-ID: <4AEF0144.3080505@s5r6.in-berlin.de>
Date:	Mon, 02 Nov 2009 16:56:52 +0100
From:	Stefan Richter <stefanr@...6.in-berlin.de>
To:	Jeff Moyer <jmoyer@...hat.com>
CC:	Zubin Dittia <zubin@...tri.com>, linux-kernel@...r.kernel.org
Subject: Re: SSD read latency negatively impacted by large writes (independent of choice of I/O scheduler)

Jeff Moyer wrote:
> Zubin Dittia <zubin@...tri.com> writes:
[...]
>> about 30 seconds after the write (which is when the write is
>> actually written back to the device from buffer cache), I see a very
>> large spike in read latency: from 200 microseconds to 25 milliseconds.
>>  This seems to imply that the writes issued by the scheduler are not
>> being broken up into sufficiently small chunks with interspersed
>> reads; instead, the whole sequential write seems to be getting issued
>> while starving reads during that period.
[...]
>> Playing around with different I/O
>> schedulers and parameters doesn't seem to help at all.
[...]
> I haven't verified your findings, but if what you state is true, then
> you could try tuning max_sectors_kb for your device.  Making that
> smaller will decrease the total amount of I/O that can be queued in the
> device at any given time.  There's always a trade-off between bandwidth
> and latency, of course.

Maximum transfer size per request is indeed one factor; another is
queue_depth.  With a deep queue, a read request interspersed among many
write requests will still be held up by the write requests queued ahead
of it.  (Once the scheduler has issued the requests to the device's
queue, it can no longer reorder them --- only the disk's firmware could
still reorder them, provided it is sophisticated enough and there are no
barriers in the mix.)
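
For illustration, a rough sketch of how both knobs could be inspected
and lowered through sysfs (the device name "sdb" and the values written
below are only examples, not recommendations; needs root):

#!/usr/bin/env python3
# Example sketch: shrink the per-request transfer size and the queue
# depth of one block device via sysfs.  "sdb" and the values below are
# placeholders, not tuning advice.
from pathlib import Path

DEV = "sdb"  # substitute the SSD under test
MAX_SECTORS_KB = Path(f"/sys/block/{DEV}/queue/max_sectors_kb")
QUEUE_DEPTH = Path(f"/sys/block/{DEV}/device/queue_depth")  # SCSI/SATA devices

def show_then_set(path: Path, new_value: str) -> None:
    """Print the current value, then write the new one."""
    print(f"{path}: {path.read_text().strip()} -> {new_value}")
    path.write_text(new_value)

show_then_set(MAX_SECTORS_KB, "64")  # smaller requests: lower latency, less bandwidth
show_then_set(QUEUE_DEPTH, "4")      # shallower queue: fewer writes queued ahead of a read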

When transfer size and queue depth are set differently from their
defaults, the various I/O schedulers should be tested again, because
their behaviors may then differ from one another more than before.
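
A similarly rough sketch for repeating the measurement under each
elevator (again, "sdb" is only an example; the list of available
schedulers depends on the kernel configuration):

from pathlib import Path

SCHED = Path("/sys/block/sdb/queue/scheduler")

print(SCHED.read_text().strip())  # e.g. "noop anticipatory deadline [cfq]"
SCHED.write_text("deadline")      # switch before re-running the read latency test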
-- 
Stefan Richter
-=====-==--= =-== ---=-
http://arcgraph.de/sr/
