Date:	Wed, 22 Apr 2009 09:23:07 -0400
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc:	nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
	mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
	jens.axboe@...cle.com, ryov@...inux.co.jp,
	fernando@...ellilink.co.jp, s-uchida@...jp.nec.com,
	taka@...inux.co.jp, arozansk@...hat.com, jmoyer@...hat.com,
	oz-kernel@...hat.com, dhaval@...ux.vnet.ibm.com,
	balbir@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, akpm@...ux-foundation.org,
	menage@...gle.com, peterz@...radead.org
Subject: Re: [RFC] IO Controller

On Wed, Apr 22, 2009 at 11:04:58AM +0800, Gui Jianfeng wrote:
> Vivek Goyal wrote:
> > On Fri, Apr 10, 2009 at 05:33:10PM +0800, Gui Jianfeng wrote:
> >> Vivek Goyal wrote:
> >>> Hi All,
> >>>
> >>> Here is another posting for IO controller patches. Last time I had posted
> >>> RFC patches for an IO controller which did bio control per cgroup.
> >>   Hi Vivek,
> >>
> >>   I got the following OOPS when testing, can't reproduce again :(
> >>
> > 
> > Hi Gui,
> > 
> > Thanks for the report. Will look into it and see if I can reproduce it.
> 
>   Hi Vivek,
> 
>   The following script can reproduce the bug in my box.
> 
> #!/bin/sh
> 
> mkdir /cgroup
> mount -t cgroup -o io io /cgroup
> mkdir /cgroup/test1
> mkdir /cgroup/test2
> 
> echo cfq > /sys/block/sda/queue/scheduler
> echo 7 > /cgroup/test1/io.ioprio
> echo 1 > /cgroup/test2/io.ioprio
> echo 1 > /proc/sys/vm/drop_caches
> dd if=1000M.1 of=/dev/null &
> pid1=$!
> echo $pid1
> echo $pid1 > /cgroup/test1/tasks
> dd if=1000M.2 of=/dev/null &
> pid2=$!
> echo $pid2
> echo $pid2 > /cgroup/test2/tasks
> 
> 
> rmdir /cgroup/test1
> rmdir /cgroup/test2
> umount /cgroup
> rmdir /cgroup

Thanks Gui. We have got races between task movement and cgroup deletion.
In the original bfq patch, Fabio had implemented the logic to migrate the
task's queue synchronously. I found that logic to be a little complicated,
so I changed it to a delayed movement of the queue from the old cgroup to
the new cgroup. Fabio later pointed out that this introduces a race where
the old cgroup can be deleted before the task's queue has actually moved
to the new cgroup.
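
To make the ordering concrete, here is a minimal userspace sketch of the
problem (the structure and field names are just illustrative, not the
actual patch code): because the queue move is deferred, deleting the old
cgroup can free the io group while the queue still points at it.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-ins for the real kernel structures. */
struct io_group {
	int ioprio;		/* e.g. 7 for test1 in Gui's script */
};

struct io_queue {
	struct io_group *iog;	/* group this queue currently belongs to */
};

int main(void)
{
	struct io_group *test1 = calloc(1, sizeof(*test1));
	test1->ioprio = 7;

	struct io_queue q = { .iog = test1 };	/* dd's queue lives in test1 */

	/*
	 * The task is moved to test2, but the queue migration is only
	 * scheduled to happen later, so q.iog still points at test1.
	 */

	/* rmdir /cgroup/test1 runs before the delayed move and frees it. */
	free(test1);

	/*
	 * When the delayed move finally runs it would dereference q.iog,
	 * which is now dangling -- the oops Gui's script triggers.
	 * (Deliberately not dereferenced here.)
	 */
	printf("queue still points at freed group %p\n", (void *)q.iog);
	return 0;
}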

Nauman is currently implementing reference counting for io groups. That
will solve this problem and, at the same time, some other problems, such
as the movement of a queue to the root group during cgroup deletion,
which can potentially result in an unfair share for that queue for some
time.
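
Roughly, the refcounting would work like the sketch below (again a
userspace model with made-up helper names, not Nauman's actual patch):
the cgroup and every attached queue each hold a reference, so rmdir only
drops the cgroup's reference and the group stays around until the delayed
queue move drops the last one.

#include <stdio.h>
#include <stdlib.h>

struct io_group {
	int refcount;		/* cgroup + every attached queue hold one */
	int ioprio;
};

static struct io_group *iog_get(struct io_group *iog)
{
	iog->refcount++;
	return iog;
}

static void iog_put(struct io_group *iog)
{
	if (--iog->refcount == 0) {
		printf("io group actually freed\n");
		free(iog);
	}
}

int main(void)
{
	/* Group created when the cgroup is made; the cgroup holds ref #1. */
	struct io_group *test1 = calloc(1, sizeof(*test1));
	test1->refcount = 1;

	/* A queue attaches to the group and takes its own reference. */
	struct io_group *iog = iog_get(test1);

	/* rmdir /cgroup/test1: only the cgroup's reference is dropped. */
	iog_put(test1);
	printf("after rmdir, group still alive (refcount=%d)\n", iog->refcount);

	/* The delayed queue move completes and drops the last reference. */
	iog_put(iog);
	return 0;
}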

Thanks
Vivek
