Message-ID: <5127DBEC.1000504@genband.com>
Date:	Fri, 22 Feb 2013 14:58:20 -0600
From:	Chris Friesen <chris.friesen@...band.com>
To:	Jan Engelhardt <jengelh@...i.de>
CC:	Martin Svec <martin.svec@...er.cz>,
	"Nicholas A. Bellinger" <nab@...ux-iscsi.org>,
	linux-scsi <linux-scsi@...r.kernel.org>,
	target-devel <target-devel@...r.kernel.org>,
	linux-kernel@...r.kernel.org
Subject: Re: Read I/O starvation with writeback RAID controller

On 02/22/2013 02:35 PM, Jan Engelhardt wrote:
>
> On Friday 2013-02-22 20:28, Martin Svec wrote:
>>
>> Yes, I've already tried the ROW scheduler. It helped for some low iodepths
>> depending on quantum settings but generally didn't solve the problem. I think
>> the key issue is that none of the schedulers can throttle I/O according to,
>> e.g., average request round-trip time. Shaohua Li is right here:
>> https://lkml.org/lkml/2012/12/11/598 -- as long as there's free room in the
>> device's queue they blindly dispatch requests to it.
>>
>> Which is exactly what I see in the deadline scheduler's fifo queues: there
>> are no read requests to be scheduled between writes because all readers are
>> starving. So the scheduler keeps dispatching writes using all the remaining
>> capacity of the device queue, which in turn worsens the read starvation. A
>> bigger queue depth and a bigger writeback cache mean a higher chance of read
>> starvation, even from a single writer.
>
> Sounds just like the bufferbloat problem in networking.
> Waiting for CoDel for the block layer :)
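
To make that dispatch problem concrete, here is a rough user-space sketch of 
the kind of latency-based gating Shaohua suggests. Everything in it is 
invented for illustration: the target, the EWMA weight, and the completion 
model that makes round-trip time grow with queue occupancy, the way a 
writeback cache behind a deep queue tends to behave.

#include <stdio.h>

#define QUEUE_DEPTH     256     /* what the device will accept */
#define TARGET_RTT_US   20000   /* acceptable average round-trip time */

static double avg_rtt_us;       /* EWMA of completed-request latency */

static void complete(double rtt_us)
{
        avg_rtt_us += (rtt_us - avg_rtt_us) / 8;
}

static int may_dispatch(int in_flight)
{
        if (in_flight >= QUEUE_DEPTH)
                return 0;       /* the only check the stock schedulers make */
        return avg_rtt_us < TARGET_RTT_US;      /* the missing check */
}

int main(void)
{
        int in_flight = 0, dispatched = 0, gated = 0;

        for (int i = 0; i < 1000; i++) {
                int n;

                /* Dispatch a small burst while the gate is open. */
                for (n = 0; n < 8 && may_dispatch(in_flight); n++) {
                        in_flight++;
                        dispatched++;
                }
                if (n == 0)
                        gated++;

                /* One completion per tick; latency grows with occupancy. */
                if (in_flight > 0) {
                        complete(1000.0 * in_flight);
                        in_flight--;
                }
        }
        printf("dispatched %d, gated %d ticks, avg rtt %.0f us\n",
               dispatched, gated, avg_rtt_us);
        return 0;
}

The queue-full test is all the stock schedulers do; the round-trip-time term 
is the piece that keeps the device queue from filling with writes.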

Is there any way to have reads jump to the head of the queue in the disk 
controller?

Otherwise, it seems like we might need to minimize disk cache usage and do 
the scheduling in software.
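
Some of that is already reachable from userspace via sysfs. A minimal sketch, 
assuming a SCSI disk at sda (an assumption on my part; queue_depth is only 
exposed for SCSI devices, and both writes need root):

#include <stdio.h>

static int write_knob(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", val);
        return fclose(f);
}

int main(void)
{
        /* Few in-flight commands at the device... */
        write_knob("/sys/block/sda/device/queue_depth", "4");
        /* ...while the software queue stays deep enough to sort. */
        write_knob("/sys/block/sda/queue/nr_requests", "128");
        return 0;
}

Shrinking queue_depth while leaving nr_requests deep keeps requests in the 
software queue, where the elevator can still sort reads ahead of writes.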

This effectively mirrors what the CoDel people are doing by using tiny tx 
ring buffers to fight bufferbloat.  The difference is that with a NIC, all 
you have to do is make sure the buffer doesn't empty and you get full speed; 
with a disk, the more you stuff into the cache, the better it can schedule 
things.
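
So maybe the compromise is adaptive: grow the allowed device queue while read 
latency stays under a target, shrink it when it doesn't. A purely 
illustrative sketch; the thresholds and the latency trace are made up:

#include <stdio.h>

#define MIN_DEPTH       1
#define MAX_DEPTH       256
#define READ_TARGET_US  10000

static int depth = 32;

static void adjust_depth(double read_lat_us)
{
        if (read_lat_us > READ_TARGET_US && depth > MIN_DEPTH)
                depth /= 2;     /* reads hurting: drain the cache */
        else if (depth < MAX_DEPTH)
                depth++;        /* reads fine: let writes batch up */
}

int main(void)
{
        /* Pretend read latency climbs as the cache fills, then recovers. */
        double lat[] = { 2000, 4000, 9000, 15000, 30000, 12000, 6000, 3000 };

        for (int i = 0; i < 8; i++) {
                adjust_depth(lat[i]);
                printf("read lat %6.0f us -> allowed depth %d\n",
                       lat[i], depth);
        }
        return 0;
}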

Chris