Date:	Sun, 25 Sep 2011 09:34:16 +0200
From:	Corrado Zoccolo <czoccolo@...il.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Shaohua Li <shaohua.li@...el.com>,
	lkml <linux-kernel@...r.kernel.org>,
	Jens Axboe <jaxboe@...ionio.com>,
	Maxim Patlasov <maxim.patlasov@...il.com>
Subject: Re: [patch]cfq-iosched: delete deep seeky queue idle logic

On Fri, Sep 23, 2011 at 3:24 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> On Wed, Sep 21, 2011 at 07:16:20PM +0800, Shaohua Li wrote:
>
> [..]
>> > Try a workload with one shallow seeky queue and one deep (16) one, on
>> > a single spindle NCQ disk.
>> > I think the behaviour when I submitted my patch was that both were
>> > getting 100ms slice (if this is not happening, probably some
>> > subsequent patch broke it).
>> > If you remove idling, they will get disk time roughly in proportion
>> > 16:1, i.e. pretty unfair.
>> I thought you were talking about a workload with one thread at depth 4
>> and the other thread at depth 16. I did some tests here. In an old
>> kernel, without the deep seeky idle logic, the threads get disk time in
>> a 1:5 proportion. With it, they get almost equal disk time, so this
>> reaches your goal. In the latest kernel, with or without the logic,
>> there is no big difference (the depth-16 thread gets about 5x more disk
>> time). With the logic, the depth-4 thread gets equal disk time in the
>> first several slices. But after an idle expiration (mostly because the
>> current block plug holds requests in the task list and doesn't add them
>> to the elevator), the queue never gets detected as deep, because the
>> queue dispatches requests one by one.
>
> When the plugged requests are flushed, they will be added to the elevator,
> and at that point the queue should be marked as deep?
>
> Anyway, what's wrong with the idea I suggested in the other mail of
> expiring a sync-noidle queue after a few request dispatches so that it
> does not starve other sync-noidle queues?
I don't know the current state of the code. Are the noidle queues
sorted in some tree, by sector number?
If that is the case, then even an expired queue could still be at the
front of the tree.
>
> Thanks
> Vivek
>



-- 
__________________________________________________________________________

dott. Corrado Zoccolo                          mailto:czoccolo@...il.com
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
                               Tales of Power - C. Castaneda
