Message-ID: <49DAF54A.10909@cn.fujitsu.com>
Date:	Tue, 07 Apr 2009 14:40:10 +0800
From:	Gui Jianfeng <guijianfeng@...fujitsu.com>
To:	Vivek Goyal <vgoyal@...hat.com>
CC:	nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
	mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
	jens.axboe@...cle.com, ryov@...inux.co.jp,
	fernando@...ellilink.co.jp, s-uchida@...jp.nec.com,
	taka@...inux.co.jp, arozansk@...hat.com, jmoyer@...hat.com,
	oz-kernel@...hat.com, dhaval@...ux.vnet.ibm.com,
	balbir@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, akpm@...ux-foundation.org,
	menage@...gle.com, peterz@...radead.org
Subject: Re: [RFC] IO Controller

Gui Jianfeng wrote:
> Vivek Goyal wrote:
>> On Thu, Apr 02, 2009 at 02:39:40PM +0800, Gui Jianfeng wrote:
>>> Vivek Goyal wrote:
>>>> Hi All,
>>>>
>>>> Here is another posting for IO controller patches. Last time I had posted
>>>> RFC patches for an IO controller which did bio control per cgroup.
>>>>
>>>> http://lkml.org/lkml/2008/11/6/227
>>>>
>>>> One of the takeaways from the discussion in that thread was that we should
>>>> implement a common layer containing the proportional-weight scheduling
>>>> code, which can be shared by all the IO schedulers.
>>>>
>>>   
>>>   Hi Vivek,
>>>
>>>   I did some tests on my *old* i386 box (with two concurrent dd processes running),
>>>   and noticed that the IO Controller doesn't work well in that situation, but it works
>>>   perfectly on my *new* x86 box. I dug into this problem, and I guess the major reason
>>>   is that my *old* i386 box is too slow: it can't ensure that the two running ioqs are
>>>   always backlogged.
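
  [A rough sketch of the condition being discussed; the structure and field
  names below are illustrative only, not taken from the posted patches.
  Proportional fairness only holds while an ioq stays backlogged, i.e. it
  still has requests queued whenever the scheduler looks at it:

	/*
	 * Illustrative only: an ioq counts as backlogged while it has
	 * requests queued; once it runs dry it is removed from the
	 * service tree and stops receiving its share of disk time.
	 */
	static inline int ioq_backlogged(struct io_queue *ioq)
	{
		return ioq->nr_queued > 0;
	}

  On a slow CPU, two dd processes may not refill both ioqs fast enough to
  keep this true for a whole time slice.]
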
>> Hi Gui,
>>
>> Have you run top to see what the CPU usage percentage is? I suspect that
>> the CPU is not keeping pace with the disk to enqueue enough requests. I
>> think the process might be blocked somewhere else so that it cannot issue
>> requests.
>>
>>>   If that is the case, I happen to have a thought: when an ioq uses up its time slice,
>>>   don't expire it immediately. Maybe we can give it a bit of bonus idling time to wait
>>>   for new requests if this ioq's finish time and its ancestors' finish times are all
>>>   much smaller than those of the other entities on each corresponding service tree.
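
  [A hedged sketch of this heuristic; the helpers below (min_finish_of_others,
  BONUS_IDLE_MARGIN) are made up for illustration and are not from the posted
  patches:

	/*
	 * Illustrative only: instead of expiring an ioq the moment its
	 * slice runs out, grant a short bonus idle window when the ioq
	 * and all of its ancestors are far ahead (smaller finish time)
	 * of every other entity on their service trees.
	 */
	static int ioq_deserves_bonus_idle(struct io_queue *ioq)
	{
		struct io_entity *entity = &ioq->entity;

		for (; entity != NULL; entity = entity->parent)
			if (entity->finish + BONUS_IDLE_MARGIN >=
			    min_finish_of_others(entity))
				return 0;
		return 1;
	}
  ]
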
>> Have you tried it with "fairness" enabled? With "fairness" enabled, for
>> sync queues I wait for one extra idle time slice ("8ms") for the queue
>> to get backlogged again before I move to the next queue.
>>
>> Otherwise, try increasing the idle time length to a higher value, say "12ms",
>> just to see whether that has any impact.
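
  [Roughly what the "fairness" behaviour described above amounts to; this is a
  fragment with illustrative names (elv_ioq_sync, efqd->idle_slice_timer,
  efqd->elv_slice_idle), not the actual patch code:

	/*
	 * Illustrative only: before expiring an empty sync ioq, arm an
	 * idle timer of one idle slice (8ms by default here; raise it
	 * to 12ms to test) and pick the next queue only if no new
	 * request arrives before the timer fires.
	 */
	if (elv_ioq_sync(ioq) && !ioq->nr_queued && efqd->fairness) {
		mod_timer(&efqd->idle_slice_timer,
			  jiffies + msecs_to_jiffies(efqd->elv_slice_idle));
		return;	/* keep the queue active instead of expiring it */
	}
  ]
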
>>
>> Can you please also send me the output of blkparse? It might give some idea
>> of how the IO scheduler sees the IO pattern.
> 
>   Hi Vivek,
> 
>   Sorry for the late reply. I tried the "fairness" patch, but it does not seem to work.
>   I've also tried extending the idle value, which does not work either.
>   The blktrace output is attached. It seems that the high-priority ioq is being deleted
>   from the busy tree too often due to a lack of requests. My box has a single CPU, and
>   the CPU is a little slow. Maybe the two concurrent dd processes are contending for
>   the CPU to submit requests, and that is why the ioqs are not always backlogged.

  Hi Vivek,

  Sorry for the noise; there were some configuration errors when I tested, which produced
  the improper results.
  The "fairness" patch seems to work fine now! It keeps the high-priority ioq *always* backlogged :)

> 
>> Thanks
>> Vivek
>>
>>
>>
> 

-- 
Regards
Gui Jianfeng

