Message-ID: <20090402140037.GC12851@redhat.com>
Date:	Thu, 2 Apr 2009 10:00:37 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc:	nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
	mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
	jens.axboe@...cle.com, ryov@...inux.co.jp,
	fernando@...ellilink.co.jp, s-uchida@...jp.nec.com,
	taka@...inux.co.jp, arozansk@...hat.com, jmoyer@...hat.com,
	oz-kernel@...hat.com, dhaval@...ux.vnet.ibm.com,
	balbir@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, akpm@...ux-foundation.org,
	menage@...gle.com, peterz@...radead.org
Subject: Re: [RFC] IO Controller

On Thu, Apr 02, 2009 at 02:39:40PM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> > Hi All,
> > 
> > Here is another posting for IO controller patches. Last time I had posted
> > RFC patches for an IO controller which did bio control per cgroup.
> > 
> > http://lkml.org/lkml/2008/11/6/227
> > 
> > One of the takeaways from the discussion in that thread was that we
> > should implement a common layer containing the proportional-weight
> > scheduling code, which can be shared by all the IO schedulers.
> > 
>   
>   Hi Vivek,
> 
>   I did some tests on my *old* i386 box (with two concurrent dd runs) and
>   noticed that the IO Controller doesn't work well in that situation, though
>   it works perfectly on my *new* x86 box. I dug into this problem, and I
>   guess the major reason is that my *old* i386 box is too slow: it can't
>   ensure that the two running ioqs are always backlogged.
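
The two-concurrent-dd workload described above can be reproduced roughly as
follows; file sizes and paths are arbitrary placeholders, and in a real run
the two readers would be placed in separate cgroups, whose interface names
depend on the patch set:

```shell
# Create two source files, then read them back with two concurrent
# dd processes, mimicking the two-reader workload from the report.
dd if=/dev/zero of=/tmp/testfile1 bs=1M count=16 2>/dev/null
dd if=/dev/zero of=/tmp/testfile2 bs=1M count=16 2>/dev/null
dd if=/tmp/testfile1 of=/dev/null bs=1M &
dd if=/tmp/testfile2 of=/dev/null bs=1M &
wait
echo "both readers finished"
```

Note that on a box with enough RAM the reads above are served from the page
cache, so for a real disk measurement the files should be larger than memory
(or the cache dropped before the read phase).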

Hi Gui,

Have you run top to see what the CPU usage percentage is? I suspect that
the CPU is not keeping pace with the disk and cannot enqueue enough
requests. The process might also be blocked somewhere else, so that it
cannot issue requests.
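
For that check, a one-shot batch-mode top sample taken while the dd jobs run
is enough; the Cpu(s) summary line shows the user/system/idle split:

```shell
# One-shot, non-interactive top sample; the Cpu(s) summary line shows
# whether the box is CPU-bound while the readers run.
top -b -n 1 | head -n 5
```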

>   If that is the case, I happen to have a thought: when an ioq uses up its
>   time slice, we don't expire it immediately. Maybe we can give it a bonus
>   period of idling to wait for new requests, if this ioq's finish time and
>   its ancestors' finish times are all much smaller than those of the other
>   entities on each corresponding service tree.

Have you tried it with "fairness" enabled? With "fairness" enabled, for
sync queues I wait for one extra idle time slice (8ms) for the queue to
get backlogged again before I move on to the next queue.

Otherwise, try increasing the idle time to a higher value, say 12ms,
just to see whether that has any impact.
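
The knob names below are assumptions: this patch set's tunables may be named
or located differently, and on mainline CFQ the analogous sysfs knob is
slice_idle. A guarded sketch for bumping the idle window:

```shell
# Hypothetical tunable path: the actual sysfs layout depends on the
# patch set; mainline CFQ exposes an analogous knob as slice_idle.
DEV=sda                                    # assumption: disk under test
KNOB=/sys/block/$DEV/queue/iosched/slice_idle
if [ -w "$KNOB" ]; then
    echo 12 > "$KNOB"                      # bump idle window to 12ms
    cat "$KNOB"
else
    echo "no writable $KNOB on this system"
fi
```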

Could you please also send me the output of blkparse? It might give some
idea of how the IO schedulers see the IO pattern.
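
If the blktrace package is installed, a trace for the device under test can
be captured and fed straight into blkparse in one pipeline (the device path
is a placeholder, and root is needed for tracing):

```shell
# Trace the device for 5 seconds while the dd jobs run, decoding the
# events with blkparse into a text file for inspection.
DEV=/dev/sda                               # assumption: device under test
if command -v blktrace >/dev/null 2>&1; then
    blktrace -d "$DEV" -o - -w 5 | blkparse -i - > dd_trace.txt \
        || echo "blktrace failed (needs root and a real device?)"
else
    echo "blktrace/blkparse not installed on this system"
fi
```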

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
