Date:	Tue, 24 Mar 2009 14:29:06 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Nauman Rafique <nauman@...gle.com>
Cc:	Gui Jianfeng <guijianfeng@...fujitsu.com>,
	Dhaval Giani <dhaval@...ux.vnet.ibm.com>, dpshah@...gle.com,
	lizf@...fujitsu.com, mikew@...gle.com, fchecconi@...il.com,
	paolo.valente@...more.it, jens.axboe@...cle.com,
	ryov@...inux.co.jp, fernando@...ellilink.co.jp,
	s-uchida@...jp.nec.com, taka@...inux.co.jp, arozansk@...hat.com,
	jmoyer@...hat.com, oz-kernel@...hat.com, balbir@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, akpm@...ux-foundation.org,
	menage@...gle.com, peterz@...radead.org
Subject: Re: [PATCH 01/10] Documentation

On Tue, Mar 24, 2009 at 11:14:13AM -0700, Nauman Rafique wrote:
> On Tue, Mar 24, 2009 at 5:58 AM, Vivek Goyal <vgoyal@...hat.com> wrote:
> > On Mon, Mar 23, 2009 at 10:32:41PM -0700, Nauman Rafique wrote:
> >
> > [..]
> >> > DESC
> >> > io-controller: idle for sometime on sync queue before expiring it
> >> > EDESC
> >> >
> >> > o When a sync queue expires, in many cases it might be empty and then
> >> >   it will be deleted from the active tree. This will lead to a scenario
> >> >   where, out of two competing queues, only one is on the tree and, when a
> >> >   new queue is selected, a vtime jump takes place and we don't see service
> >> >   provided in proportion to weight.
> >> >
> >> > o In general this is a fundamental problem with fairness for sync queues
> >> >   that are not continuously backlogged. It looks like idling is the
> >> >   only solution to make sure such queues can get some decent amount
> >> >   of disk bandwidth in the face of competition from continuously backlogged
> >> >   queues. But excessive idling has the potential to reduce performance on
> >> >   SSDs and disks with command queuing.
> >> >
> >> > o This patch experiments with waiting for the next request to come before a
> >> >   queue is expired after it has consumed its time slice. This can ensure
> >> >   more accurate fairness numbers in some cases.
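
As a rough illustration of the vtime bookkeeping behind the jump described
above (a toy sketch with made-up structure and function names, not code from
BFQ or from this patch):

struct toy_queue {
        unsigned int       weight;      /* share of disk time            */
        unsigned long long vtime;       /* virtual time consumed so far  */
};

/* Charge a queue for 'served' units of disk time.  A heavier weight
 * advances vtime more slowly, which is what yields service in proportion
 * to weight while both queues stay on the tree.                          */
static void charge_service(struct toy_queue *q, unsigned long long served)
{
        q->vtime += served / q->weight;
}

/* Re-adding a queue that was deleted from the tree while empty: its vtime
 * is pulled up to the tree's minimum (the "jump"), so whatever share it
 * missed while off the tree is forgiven rather than repaid.              */
static void reinsert_queue(struct toy_queue *q, unsigned long long tree_min_vtime)
{
        if (q->vtime < tree_min_vtime)
                q->vtime = tree_min_vtime;
}
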
> >>
> >> Vivek, have you introduced this option just to play with it, or are you
> >> planning to make it part of the patch set? Waiting for a new
> >> request to come before expiring the time slice sounds problematic.
> >
> > What are the issues you foresee with it? This is just an extra 8ms of
> > idling on the sync queue, and that only if the think time of the queue is
> > not high.
> >
> > We already do idling on sync queues. In this case we do an extra idle
> > even after the queue has consumed its allocated quota. It helps me get
> > fairness numbers, and I have put it under a tunable, "fairness", so by
> > default this code will not kick in.
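
To make the conditions concrete, here is a minimal sketch of the kind of
check being described (the names, including the "fairness" flag, are
stand-ins chosen for illustration, not identifiers from the patch; the 8ms
window is the figure mentioned above):

#include <stdbool.h>

static bool fairness;                           /* tunable, off by default */
static const unsigned int idle_window_ms = 8;   /* bounded extra wait      */

struct toy_sync_queue {
        bool         is_sync;
        unsigned int think_time_ms;     /* mean gap between its requests */
};

/* Idle a little longer after the slice is used up only if the tunable is
 * set, the queue is sync, and its next request is expected soon.         */
static bool allow_extra_idle(const struct toy_sync_queue *q)
{
        if (!fairness || !q->is_sync)
                return false;
        return q->think_time_ms <= idle_window_ms;
}
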
> >
> > Another possible option would be, when expiring a sync queue, not to
> > remove the queue from the tree immediately and to remove it later if
> > there is no request from the queue within 8ms or so. I am not sure
> > whether, with BFQ, it is feasible to do that without creating issues with
> > the current implementation. The current approach was simple, so I stuck
> > with it to begin with.
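
A hedged sketch of what that alternative might look like (all names are
invented for illustration, and it assumes such deferred removal is feasible
in the current BFQ-based implementation, which is exactly the open question):

struct lazy_queue {
        int           nr_queued;        /* requests currently queued     */
        unsigned long remove_deadline;  /* when removal becomes allowed  */
};

/* At expiry time: start a grace period instead of deleting the queue
 * from the service tree right away.                                      */
static void defer_removal(struct lazy_queue *q, unsigned long now,
                          unsigned long grace)
{
        q->remove_deadline = now + grace;       /* e.g. now + 8ms */
}

/* On a later scheduling decision: delete the queue only if it is still
 * empty and its grace period has passed.                                 */
static int should_remove_now(const struct lazy_queue *q, unsigned long now)
{
        return q->nr_queued == 0 && now >= q->remove_deadline;
}
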
> 
> If the maximum wait is bounded by 8ms, then it should be fine. The
> comments on the patch did not mention such a limit; it sounded like an
> unbounded wait to me.
> 
> Does keeping the sync queue in the ready tree solve the problem too? Is
> it because it avoids a virtual time jump?
> 

I have not tried the second approach yet, but that should also solve the
vtime jump issue.

Thanks
Vivek
