Message-ID: <20110923132441.GA10289@redhat.com>
Date:	Fri, 23 Sep 2011 09:24:41 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Shaohua Li <shaohua.li@...el.com>
Cc:	Corrado Zoccolo <czoccolo@...il.com>,
	lkml <linux-kernel@...r.kernel.org>,
	Jens Axboe <jaxboe@...ionio.com>,
	Maxim Patlasov <maxim.patlasov@...il.com>
Subject: Re: [patch]cfq-iosched: delete deep seeky queue idle logic

On Wed, Sep 21, 2011 at 07:16:20PM +0800, Shaohua Li wrote:

[..]
> > Try a workload with one shallow seeky queue and one deep (16) one, on
> > a single-spindle NCQ disk.
> > I think the behaviour when I submitted my patch was that both were
> > getting a 100ms slice (if this is not happening, some subsequent
> > patch probably broke it).
> > If you remove idling, they will get disk time roughly in proportion
> > 16:1, i.e. pretty unfair.
> I thought you were talking about a workload with one thread at depth 4 and
> the other thread at depth 16. I did some tests here. In an old kernel,
> without the deep seeky idle logic, the threads get disk time in
> proportion 1:5. With it, they get almost equal disk time, so this
> reaches your goal. In the latest kernel, w/wo the logic, there is no big
> difference (the depth-16 thread gets about 5x more disk time). With the
> logic, the depth-4 thread gets equal disk time in the first several slices.
> But after an idle expiration (mostly because the current block plug holds
> requests in the task list and doesn't add them to the elevator), the queue
> never gets detected as deep, because the queue dispatches requests one by one.

When the plugged requests are flushed, they will all be added to the elevator
at once; shouldn't the queue be marked as deep at that point?
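
To make that concrete, here is a small standalone C model of the detection
behaviour being discussed. It is only a sketch: the threshold value and the
names (CFQQ_DEEP_THRESHOLD, update_deep_flag, struct cfqq_model) are
illustrative assumptions, not code taken from block/cfq-iosched.c. The point
it models is that the deep flag is only set when enough requests sit in the
elevator at the same time, so a plug that trickles requests in one by one
never trips it.

#include <stdbool.h>
#include <stdio.h>

/* Assumed threshold for illustration; the real check compares queued
 * request counts against a small constant. */
#define CFQQ_DEEP_THRESHOLD 4

struct cfqq_model {
	int queued;   /* requests of this queue sitting in the elevator */
	bool deep;    /* sticky "deep" flag */
};

/* Run on every request insertion into the elevator. */
static void update_deep_flag(struct cfqq_model *q)
{
	if (q->queued >= CFQQ_DEEP_THRESHOLD)
		q->deep = true;
}

int main(void)
{
	struct cfqq_model q = { 0, false };

	/* Plugged case: the task keeps 16 requests in flight, but the
	 * plug releases them to the elevator one at a time, so the
	 * elevator never sees more than one queued request and the
	 * queue is never marked deep. */
	for (int i = 0; i < 16; i++) {
		q.queued = 1;
		update_deep_flag(&q);
		q.queued = 0;          /* dispatched before the next insert */
	}
	printf("deep after one-by-one inserts: %s\n", q.deep ? "yes" : "no");

	/* Batched flush: all 16 requests hit the elevator together,
	 * the threshold is crossed, and the flag is set. */
	q.queued = 16;
	update_deep_flag(&q);
	printf("deep after batched flush:      %s\n", q.deep ? "yes" : "no");
	return 0;
}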

Anyway, what's wrong with the idea I suggested in the other mail of expiring
a sync-noidle queue after a few request dispatches so that it does not
starve other sync-noidle queues?
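
As a sketch of what that would look like, here is a standalone C model of the
budget idea. Again the names (SYNC_NOIDLE_BUDGET, cfqq_should_expire) and the
budget value are assumptions for illustration, not cfq-iosched.c code. With a
budget of 4, a depth-16 queue hands the dispatch slot back after four requests
instead of sixteen, so the split between the two queues tightens from 16:1 to
roughly 4:1.

#include <stdbool.h>
#include <stdio.h>

#define SYNC_NOIDLE_BUDGET 4  /* assumed per-turn dispatch cap */

struct cfqq_model {
	const char *name;
	int depth;            /* requests the task keeps in flight */
	int dispatched;       /* dispatched during the current turn */
	long total;           /* total requests serviced */
};

/* Expire the active queue once it has used up its small budget,
 * instead of letting it run until its own requests dry up. */
static bool cfqq_should_expire(const struct cfqq_model *q)
{
	return q->dispatched >= SYNC_NOIDLE_BUDGET;
}

int main(void)
{
	struct cfqq_model qs[2] = {
		{ "deep(16)",   16, 0, 0 },
		{ "shallow(1)",  1, 0, 0 },
	};
	int active = 0;

	for (long tick = 0; tick < 1000; tick++) {
		struct cfqq_model *q = &qs[active];

		q->dispatched++;
		q->total++;

		/* A shallow queue runs out of work after "depth"
		 * dispatches; a deep queue now also stops when the
		 * budget check fires. */
		if (cfqq_should_expire(q) || q->dispatched >= q->depth) {
			q->dispatched = 0;
			active = !active;   /* service the other queue */
		}
	}
	printf("%s serviced %ld, %s serviced %ld\n",
	       qs[0].name, qs[0].total, qs[1].name, qs[1].total);
	return 0;
}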

Thanks
Vivek
