Message-ID: <20090908135352.GC15974@redhat.com>
Date: Tue, 8 Sep 2009 09:53:52 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc: linux-kernel@...r.kernel.org, jens.axboe@...cle.com,
containers@...ts.linux-foundation.org, dm-devel@...hat.com,
nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
taka@...inux.co.jp, jmoyer@...hat.com, dhaval@...ux.vnet.ibm.com,
balbir@...ux.vnet.ibm.com, righi.andrea@...il.com,
m-ikeda@...jp.nec.com, agk@...hat.com, akpm@...ux-foundation.org,
peterz@...radead.org, jmarchan@...hat.com,
torvalds@...ux-foundation.org, mingo@...e.hu, riel@...hat.com
Subject: Re: [RFC] IO scheduler based IO controller V9
On Mon, Sep 07, 2009 at 03:40:53PM +0800, Gui Jianfeng wrote:
> Hi Vivek,
>
> I happened to encounter a bug while testing IO Controller V9.
> When three tasks run concurrently in three groups (one task in a
> parent group, and the other two tasks in two different child groups
> respectively) reading or writing files on some disk, say "hdb",
> a task may hang, and other tasks that access "hdb" will hang as well.
>
> The bug only happens when using the AS io scheduler.
> The following script can reproduce this bug on my box.
>
Thanks for testing it out, Gui. I will run this test case on my machine,
see if I can reproduce the issue on my box, and try to fix it.

Is your box completely hung, or does the IO scheduler just not seem to be
doing anything? Can you try switching the io scheduler to something else
(after it appears to be hung) and see whether the switch succeeds and the
new scheduler starts working?
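
The check above can be done from another terminal. A minimal sketch
(assuming the same disk "hdb" as in your script; the helper name
check_elevator is hypothetical):

```shell
#!/bin/sh
# Read the current elevator and try to switch it to noop. If the write
# blocks too, the elevator-change path is also stuck, which narrows
# down where the hang sits.
check_elevator() {
    sched_file=$1
    if [ -r "$sched_file" ]; then
        # The active scheduler is shown in brackets, e.g. "[anticipatory]".
        echo "current scheduler: $(cat "$sched_file")"
        echo noop > "$sched_file" && echo "switch to noop succeeded"
    else
        echo "cannot read $sched_file" >&2
        return 1
    fi
}

# Demo call; on a box without hdb this just reports the missing file.
check_elevator /sys/block/hdb/queue/scheduler || true
```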
Thanks
Vivek
> ===========
> #!/bin/sh
>
> mkdir /cgroup
> mount -t cgroup -o io,blkio io /cgroup
>
> echo anticipatory > /sys/block/hdb/queue/scheduler
>
> mkdir /cgroup/test1
> echo 100 > /cgroup/test1/io.weight
>
> mkdir /cgroup/test2
> echo 400 > /cgroup/test2/io.weight
>
> mkdir /cgroup/test2/test3
> echo 400 > /cgroup/test2/test3/io.weight
>
> mkdir /cgroup/test2/test4
> echo 400 > /cgroup/test2/test4/io.weight
>
> #./rwio -r -f /hdb2/2000M.3 &
> dd if=/hdb2/2000M.3 of=/dev/null &
> pid4=$!
> echo $pid4 > /cgroup/test2/test3/tasks
> echo "pid4: $pid4"
>
> #./rwio -r -f /hdb2/2000M.1 &
> dd if=/hdb2/2000M.1 of=/dev/null &
> pid1=$!
> echo $pid1 > /cgroup/test1/tasks
> echo "pid1 $pid1"
>
> #./rwio -r -f /hdb2/2000M.2 &
> dd if=/hdb2/2000M.2 of=/dev/null &
> pid2=$!
> echo $pid2 > /cgroup/test2/test4/tasks
> echo "pid2 $pid2"
>
> sleep 20
>
> # Keep killing until the reader has actually exited (POSIX sh loop;
> # the original "for ((;1;)) { }" form is a bashism and breaks under sh).
> while ps -p $pid1 > /dev/null 2>&1
> do
>     kill -9 $pid1 > /dev/null 2>&1
> done
>
> while ps -p $pid2 > /dev/null 2>&1
> do
>     kill -9 $pid2 > /dev/null 2>&1
> done
>
>
> kill -9 $pid4 > /dev/null 2>&1
>
> rmdir /cgroup/test2/test3
> rmdir /cgroup/test2/test4
> rmdir /cgroup/test2
> rmdir /cgroup/test1
>
> umount /cgroup
> rmdir /cgroup
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/