Message-ID: <20090908191941.GF15974@redhat.com>
Date: Tue, 8 Sep 2009 15:19:41 -0400
From: Vivek Goyal <vgoyal@...hat.com>
To: Gui Jianfeng <guijianfeng@...fujitsu.com>
Cc: linux-kernel@...r.kernel.org, jens.axboe@...cle.com,
containers@...ts.linux-foundation.org, dm-devel@...hat.com,
nauman@...gle.com, dpshah@...gle.com, lizf@...fujitsu.com,
mikew@...gle.com, fchecconi@...il.com, paolo.valente@...more.it,
ryov@...inux.co.jp, fernando@....ntt.co.jp, s-uchida@...jp.nec.com,
taka@...inux.co.jp, jmoyer@...hat.com, dhaval@...ux.vnet.ibm.com,
balbir@...ux.vnet.ibm.com, righi.andrea@...il.com,
m-ikeda@...jp.nec.com, agk@...hat.com, akpm@...ux-foundation.org,
peterz@...radead.org, jmarchan@...hat.com,
torvalds@...ux-foundation.org, mingo@...e.hu, riel@...hat.com
Subject: Re: [RFC] IO scheduler based IO controller V9
On Mon, Sep 07, 2009 at 03:40:53PM +0800, Gui Jianfeng wrote:
> Hi Vivek,
>
> I happened to encounter a bug while testing IO Controller V9.
> When three tasks run concurrently in three groups -- one in a
> parent group, and the other two in two different child groups --
> reading or writing files on some disk, say "hdb", a task may
> hang, and other tasks accessing "hdb" will also hang.
>
> The bug only happens when using the AS io scheduler.
> The following script can reproduce the bug on my box.
>
Hi Gui,
I tried reproducing this on my system but could not. All three
processes get killed and the system does not hang.
Could you please dig a bit deeper into it?
- Does the whole system hang, or does only IO to the disk appear stuck?
- Does an io scheduler switch on the device still work?
- If the system is not completely hung, can you capture a blktrace on the
  device? The trace might give some idea of what is happening.
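For the second and third points, a rough sketch of the commands involved
(assuming debugfs support and the "hdb" device name from your script; run
this while the hang is in progress):

```shell
#!/bin/sh
# Check whether the elevator switch path still works on the device
# (if this write itself hangs, the scheduler switch is stuck too).
cat /sys/block/hdb/queue/scheduler
echo cfq > /sys/block/hdb/queue/scheduler

# Capture a blktrace; needs CONFIG_BLK_DEV_IO_TRACE and debugfs
# mounted at /sys/kernel/debug.
mount -t debugfs none /sys/kernel/debug 2> /dev/null
blktrace -d /dev/hdb -o hdb-trace &
sleep 30
kill $!

# Decode the per-cpu trace files into a readable event stream.
blkparse -i hdb-trace > hdb-trace.txt
```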
Thanks
Vivek
> ===========
> #!/bin/sh
>
> mkdir /cgroup
> mount -t cgroup -o io,blkio io /cgroup
>
> echo anticipatory > /sys/block/hdb/queue/scheduler
>
> mkdir /cgroup/test1
> echo 100 > /cgroup/test1/io.weight
>
> mkdir /cgroup/test2
> echo 400 > /cgroup/test2/io.weight
>
> mkdir /cgroup/test2/test3
> echo 400 > /cgroup/test2/test3/io.weight
>
> mkdir /cgroup/test2/test4
> echo 400 > /cgroup/test2/test4/io.weight
>
> #./rwio -r -f /hdb2/2000M.3 &
> dd if=/hdb2/2000M.3 of=/dev/null &
> pid4=$!
> echo $pid4 > /cgroup/test2/test3/tasks
> echo "pid4: $pid4"
>
> #./rwio -r -f /hdb2/2000M.1 &
> dd if=/hdb2/2000M.1 of=/dev/null &
> pid1=$!
> echo $pid1 > /cgroup/test1/tasks
> echo "pid1 $pid1"
>
> #./rwio -r -f /hdb2/2000M.2 &
> dd if=/hdb2/2000M.2 of=/dev/null &
> pid2=$!
> echo $pid2 > /cgroup/test2/test4/tasks
> echo "pid2 $pid2"
>
> sleep 20
>
> while ps -p $pid1 > /dev/null 2>&1
> do
>     kill -9 $pid1 > /dev/null 2>&1
> done
> while ps -p $pid2 > /dev/null 2>&1
> do
>     kill -9 $pid2 > /dev/null 2>&1
> done
>
>
> kill -9 $pid4 > /dev/null 2>&1
>
> rmdir /cgroup/test2/test3
> rmdir /cgroup/test2/test4
> rmdir /cgroup/test2
> rmdir /cgroup/test1
>
> umount /cgroup
> rmdir /cgroup