Message-ID: <3da2c0120812020100u66a94f9bk3c0e8a705bb6347e@mail.gmail.com>
Date: Tue, 2 Dec 2008 14:30:14 +0530
From: "sudhir kumar" <imsudhirkumar@...il.com>
To: linux-kernel@...r.kernel.org
Cc: vatsa@...ux.vnet.ibm.com, bharata@...ux.vnet.ibm.com,
dhaval@...ux.vnet.ibm.com, balbir@...ux.vnet.ibm.com,
skumar@...ux.vnet.ibm.com, Peter <a.p.zijlstra@...llo.nl>,
"Paul Menage" <menage@...gle.com>, mingo@...e.hu
Subject: BUG: CGROUPS [2.6.28-rc6] Task migration does not clean reference to group
Hi,
I have hit an interesting bug in cgroups. I found that cat /proc/cgroups
does not decrease the num_cgroups field when a group is deleted. It looks
to me as though somewhere we fail to drop a reference to the group, which
in turn makes it impossible to unmount the cgroup filesystem.
The steps to reproduce the bug are:
mkdir /cpu
mount -t cgroup -o cpu cgroup /cpu
cd /cpu
mkdir group1
cd group1
/./while &        # an infinite loop
echo pid > tasks  # pid is that of the infinite-loop task
cd ..             # we are now in the root group
echo pid > tasks  # move the task back to the root group
rmdir group1      # succeeds even though I was inside this group
                  # in another shell
cd ..
umount /cpu       # this fails for me as below
[root@...ik-laptop /]# umount cpu/
umount: cpu/: device is busy
And cat /proc/cgroups still shows num_cgroups as 2 for the cpu
subsystem. I have seen this on two kernels:
[root@...ik-laptop cpu]# uname -a
Linux malik-laptop.in.ibm.com 2.6.26 #1 SMP Tue Aug 12 16:39:18 IST
2008 i686 i686 i386 GNU/Linux
[root@...5b sudhir]# uname -a
Linux e325b.in.ibm.com 2.6.28-rc6 #1 SMP Mon Dec 1 15:53:09 IST 2008
i686 athlon i386 GNU/Linux
A look at /proc/cgroups:
[root@...5b sudhir]# cat /proc/cgroups
#subsys_name    hierarchy       num_cgroups     enabled
cpuset          0               1               1
debug           0               1               1
ns              0               1               1
cpu             8               2               1
cpuacct         0               1               1
freezer         0               1               1
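For what it's worth, the leaked count can be checked mechanically. The
sketch below pulls num_cgroups for the cpu subsystem out of output in
the format shown above; it feeds awk a copied sample only so the snippet
is self-contained, and on a live system you would read /proc/cgroups
directly:

```shell
# Print num_cgroups (third column) for the cpu subsystem.
# "sample" is a copy of the /proc/cgroups output pasted above; on a
# real system, pipe /proc/cgroups itself into awk instead.
sample='#subsys_name hierarchy num_cgroups enabled
cpuset 0 1 1
debug 0 1 1
ns 0 1 1
cpu 8 2 1
cpuacct 0 1 1
freezer 0 1 1'

printf '%s\n' "$sample" | awk '$1 == "cpu" { print $3 }'
# After every child group has been removed this should drop back to 1;
# in my case it stays at 2, which is the leak described above.
```

awk splits on runs of whitespace by default, so the same command works
whether the kernel separates the columns with tabs or spaces.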
Please let me know if further information is required, or whether I am
doing something wrong.
Thanks
Sudhir
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/