Message-ID: <4B1779CE.1050801@cn.fujitsu.com>
Date:	Thu, 03 Dec 2009 16:41:50 +0800
From:	Gui Jianfeng <guijianfeng@...fujitsu.com>
To:	Vivek Goyal <vgoyal@...hat.com>
CC:	linux-kernel@...r.kernel.org, jens.axboe@...cle.com,
	nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
	ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
	taka@...inux.co.jp, jmoyer@...hat.com, righi.andrea@...il.com,
	m-ikeda@...jp.nec.com, czoccolo@...il.com, Alan.Brunelle@...com
Subject: Re: Block IO Controller V4

Vivek Goyal wrote:
> On Wed, Dec 02, 2009 at 09:51:36AM +0800, Gui Jianfeng wrote:
>> Vivek Goyal wrote:
>>> Hi Jens,
>>>
>>> This is V4 of the Block IO controller patches on top of "for-2.6.33" branch
>>> of block tree.
>>>
>>> A consolidated patch can be found here:
>>>
>>> http://people.redhat.com/vgoyal/io-controller/blkio-controller/blkio-controller-v4.patch
>>>
>> Hi Vivek,
>>
>> It seems this version doesn't work very well for the direct (O_DIRECT)
>> sequential read mode. For example, create group A and group B, assign weight
>> 100 to group A and weight 400 to group B, and run a direct sequential read
>> workload in groups A and B simultaneously. Ideally, we should see a 1:4 disk
>> time differentiation between groups A and B, but I actually see almost 1:2.
>> I'm looking into this issue.
>> BTW, V3 works well for this case.
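A minimal sketch of the setup described above (the /cgroup/blkio mount point
and the blkio.weight and tasks file names are assumptions for illustration,
not details taken from the patchset):

/* Create groups A and B with a 1:4 weight ratio and move the caller
 * into group A; a second reader process would be moved into B the same
 * way before both start O_DIRECT sequential reads. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a string to a cgroup control file, aborting on error. */
static void write_file(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF) {
                perror(path);
                exit(1);
        }
        fclose(f);
}

int main(void)
{
        char pid[16];

        mkdir("/cgroup/blkio/A", 0755);
        mkdir("/cgroup/blkio/B", 0755);
        write_file("/cgroup/blkio/A/blkio.weight", "100");
        write_file("/cgroup/blkio/B/blkio.weight", "400");

        snprintf(pid, sizeof(pid), "%d", getpid());
        write_file("/cgroup/blkio/A/tasks", pid);
        return 0;
}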
> 
> Hi Gui,
> 
> In my testing of 8 fio jobs in 8 cgroups, direct sequential reads seem to
> be working fine.
> 
> http://lkml.org/lkml/2009/12/1/367
> 
> I suspect that in some cases we choose not to idle on the group and it gets
> deleted from the service tree, hence we lose our share. Can you have a look
> at the blkio.dequeue files? If there are excessive deletions, that will
> signify that we are losing share because we chose not to idle.
> 
> If yes, please also run blktrace to see in what cases we chose not to
> idle.
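A small sketch of the first check, assuming the same /cgroup/blkio layout as
above; a rapidly growing count for group A would match the excessive-deletion
case described here:

/* Dump the blkio.dequeue counters for both groups; run repeatedly
 * while the workload is active to see how fast they grow. */
#include <stdio.h>

static void dump_dequeue(const char *path)
{
        char line[256];
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                return;
        }
        while (fgets(line, sizeof(line), f))
                printf("%s: %s", path, line);
        fclose(f);
}

int main(void)
{
        dump_dequeue("/cgroup/blkio/A/blkio.dequeue");
        dump_dequeue("/cgroup/blkio/B/blkio.dequeue");
        return 0;
}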
> 
> In V3, I had a stronger check that idled on the group if it was empty, using
> the wait_busy() function. In V4 I have removed that and am instead trying to
> wait busy on a queue by extending its slice once it has consumed its
> allocated slice.
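A user-space sketch of the V3-style wait_busy() idea, with simplified
stand-in types invented for illustration (the real check in the patchset
differs in detail): idle on an empty queue that has used up its slice when
expiring it now would leave its group with nothing on the service tree.

#include <stdbool.h>
#include <stdio.h>

struct cfq_group {
        int nr_active_queues;   /* queues the group still has queued */
};

struct cfq_queue {
        struct cfq_group *group;
        int nr_requests;        /* requests still pending on this queue */
        unsigned int slice_used;
        unsigned int slice_allocated;
};

/* Idle (wait busy) if the queue is empty, its slice is consumed, and
 * expiring it would delete its group from the service tree, which is
 * how the group loses its share as described above. */
static bool wait_busy(struct cfq_queue *q)
{
        if (q->nr_requests > 0)
                return false;
        if (q->slice_used < q->slice_allocated)
                return false;
        return q->group->nr_active_queues == 1;
}

int main(void)
{
        struct cfq_group grp = { .nr_active_queues = 1 };
        struct cfq_queue q = {
                .group = &grp, .nr_requests = 0,
                .slice_used = 100, .slice_allocated = 100,
        };

        printf("wait busy: %d\n", wait_busy(&q));
        return 0;
}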

Hi Vivek,

I checked the blktrace output, and it seems that the io group was deleted all
the time because we don't have group idling any more. I pulled the wait_busy
code back into V4 and retested; the problem seems to have disappeared.

So I suggest that we retain the wait_busy code.

Thanks,
Gui
