Date:	Tue, 21 Apr 2009 20:10:56 -0700
From:	Nauman Rafique <nauman@...gle.com>
To:	Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc:	Vivek Goyal <vgoyal@...hat.com>, dpshah@...gle.com,
	lizf@...fujitsu.com, mikew@...gle.com, fchecconi@...il.com,
	paolo.valente@...more.it, jens.axboe@...cle.com,
	ryov@...inux.co.jp, fernando@...ellilink.co.jp,
	s-uchida@...jp.nec.com, taka@...inux.co.jp, arozansk@...hat.com,
	jmoyer@...hat.com, oz-kernel@...hat.com, dhaval@...ux.vnet.ibm.com,
	balbir@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, akpm@...ux-foundation.org,
	menage@...gle.com, peterz@...radead.org
Subject: Re: [RFC] IO Controller

On Tue, Apr 21, 2009 at 8:04 PM, Gui Jianfeng
<guijianfeng@...fujitsu.com> wrote:
> Vivek Goyal wrote:
>> On Fri, Apr 10, 2009 at 05:33:10PM +0800, Gui Jianfeng wrote:
>>> Vivek Goyal wrote:
>>>> Hi All,
>>>>
>>>> Here is another posting for IO controller patches. Last time I had posted
>>>> RFC patches for an IO controller which did bio control per cgroup.
>>>   Hi Vivek,
>>>
>>>   I got the following OOPS when testing; I can't reproduce it again :(
>>>
>>
>> Hi Gui,
>>
>> Thanks for the report. Will look into it and see if I can reproduce it.
>
>  Hi Vivek,
>
>  The following script can reproduce the bug on my box.
>
> #!/bin/sh
>
> mkdir /cgroup
> mount -t cgroup -o io io /cgroup
> mkdir /cgroup/test1
> mkdir /cgroup/test2
>
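> # select the CFQ scheduler and give each cgroup a different io priority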
> echo cfq > /sys/block/sda/queue/scheduler
> echo 7 > /cgroup/test1/io.ioprio
> echo 1 > /cgroup/test2/io.ioprio
> echo 1 > /proc/sys/vm/drop_caches
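> # start two background readers and move each into its own cgroup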
> dd if=1000M.1 of=/dev/null &
> pid1=$!
> echo $pid1
> echo $pid1 > /cgroup/test1/tasks
> dd if=1000M.2 of=/dev/null &
> pid2=$!
> echo $pid2
> echo $pid2 > /cgroup/test2/tasks
>
> # wait for both readers to exit, then remove the cgroups
> wait
> rmdir /cgroup/test1
> rmdir /cgroup/test2
> umount /cgroup
> rmdir /cgroup

Yes, this bug happens when we move a task from one cgroup to another and
then delete the cgroup. Since the actual move to the new cgroup is
performed in a delayed fashion, if the cgroup is removed before another
request from the task is seen (and the actual move is performed), it
trips a BUG_ON. I am working on a patch that will solve this problem and
a few others; basically, it adds reference counting for the io_group
structure. I am having a few problems with it at the moment; I will post
the patch as soon as I can get it to work.
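
For illustration, here is a minimal sketch of what reference counting
for the io_group structure could look like. The field and helper names
below are my assumptions for the sake of the example, not the actual
patch:

/*
 * Hypothetical sketch: give each io_group a reference count so that it
 * outlives cgroup removal until every delayed task move has completed.
 */
struct io_group {
	atomic_t ref;		/* holders: cgroup, queue, pending moves */
	/* ... per-group scheduling state elided ... */
};

static inline void io_group_get(struct io_group *iog)
{
	atomic_inc(&iog->ref);
}

static inline void io_group_put(struct io_group *iog)
{
	/* free the group only when the last holder drops its reference */
	if (atomic_dec_and_test(&iog->ref))
		kfree(iog);
}

With such a scheme, removing the cgroup only drops the cgroup's
reference instead of freeing the group outright; a task whose move is
still pending holds its own reference, so the delayed move finds a
valid io_group instead of hitting the BUG_ON.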

>
> --
> Regards
> Gui Jianfeng
