Message-ID: <52FDFAC6.1080802@gmail.com>
Date: Fri, 14 Feb 2014 19:15:18 +0800
From: Li Zefan <lizf.kern@...il.com>
To: Tejun Heo <tj@...nel.org>
CC: Li Zefan <lizefan@...wei.com>, LKML <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>
Subject: Re: [PATCH] cgroup: fix top cgroup refcnt leak
On 2014/02/14 17:36, Li Zefan wrote:
> If we mount the same cgroupfs at several mount points and then
> umount all of them, kill_sb() will be called only once.
>
> Therefore it's wrong to increment top_cgroup's refcnt when we find
> an existing cgroup_root.
>
> Try:
> # mount -t cgroup -o cpuacct xxx /cgroup
> # mount -t cgroup -o cpuacct xxx /cgroup2
> # cat /proc/cgroups | grep cpuacct
> cpuacct 2 1 1
> # umount /cgroup
> # umount /cgroup2
> # cat /proc/cgroups | grep cpuacct
> cpuacct 2 1 1
>
> You'll see that the cgroupfs will never be freed.
>
> Also move this chunk of code upwards.
>
> Signed-off-by: Li Zefan <lizefan@...wei.com>
> ---
> kernel/cgroup.c | 32 ++++++++++++++++----------------
> 1 file changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 37d94a2..5bfe738 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -1498,6 +1498,22 @@ retry:
>  		bool name_match = false;
> 
>  		/*
> +		 * A root's lifetime is governed by its top cgroup. Zero
> +		 * ref indicates that the root is being destroyed. Wait for
> +		 * destruction to complete so that the subsystems are free.
> +		 * We can use wait_queue for the wait, but this path is
> +		 * super cold. Let's just sleep for a bit and retry.
> +		 */
> +		if (!atomic_read(&root->top_cgroup.refcnt)) {
Oops, this fix is wrong. We call kernfs_mount() without holding the
cgroup locks, and on failure it drops the cgroup refcnt.

I guess we need to bump the refcnt first and then drop it after
kernfs_mount() returns.
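
Something along these lines, as a rough sketch only (cgroup_get()/
cgroup_put() are assumed to be the refcnt helpers from the kernfs
conversion; locking and error paths are elided):

	/* with cgroup locks held, after finding the existing root */
	cgroup_get(&root->top_cgroup);	/* pin across kernfs_mount() */

	/* ... drop cgroup locks ... */

	dentry = kernfs_mount(fs_type, flags, root->kf_root);

	/*
	 * Unconditionally drop the pinning ref. If kernfs_mount()
	 * failed, kill_sb has already dropped the ref the new mount
	 * would have held, so this put is what frees the root instead
	 * of leaking it.
	 */
	cgroup_put(&root->top_cgroup);

The pin keeps the root alive across kernfs_mount() even when a failing
mount triggers kill_sb(), while still letting the final umount free it.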