Message-ID: <20200509210214.408e847a@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Sat, 9 May 2020 21:02:14 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Zefan Li <lizefan@...wei.com>
Cc: Tejun Heo <tj@...nel.org>, David Miller <davem@...emloft.net>,
yangyingliang <yangyingliang@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
<huawei.libin@...wei.com>, <guofan5@...wei.com>,
<linux-kernel@...r.kernel.org>, <cgroups@...r.kernel.org>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: [PATCH v2] netprio_cgroup: Fix unlimited memory leak of v2 cgroups

On Fri, 8 May 2020 22:58:29 -0700 Jakub Kicinski wrote:
> On Sat, 9 May 2020 11:32:10 +0800 Zefan Li wrote:
> > If systemd is configured to use hybrid mode, which enables the use of
> > both cgroup v1 and v2, systemd will create a new cgroup on both the
> > default root (v2) and the netprio_cgroup hierarchy (v1) for a new
> > session and attach the task to both cgroups. If the task then does any
> > network activity, the v2 cgroup can never be freed after the session
> > exits.
> >
> > One of our machines ran into OOM due to this memory leak.
> >
> > In the scenario described above, when sk_alloc() is called,
> > cgroup_sk_alloc() thinks it's in v2 mode, so it stores the cgroup
> > pointer in sk->sk_cgrp_data and increments the cgroup refcnt, but then
> > sock_update_netprioidx() thinks it's in v1 mode, so it overwrites
> > sk->sk_cgrp_data with the netprioidx value. As a result the reference
> > is never put and the cgroup can never be freed.
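
For context, a simplified sketch of the dual-mode sk_cgrp_data involved
here (abbreviated from the include/linux/cgroup-defs.h helpers of this
era; the real definitions also handle big-endian layout and an early
equality check):

	/* Abbreviated sketch of struct sock_cgroup_data (little-endian
	 * layout). In v2 ("pointer") mode, val holds the socket's cgroup
	 * pointer; cgroup pointers are aligned, so bit 0 of is_data is
	 * clear. Writing v1 data flips the union into "data" mode and
	 * wipes the pointer word.
	 */
	struct sock_cgroup_data {
		union {
			struct {
				u8	is_data;  /* bit 0 set => v1 data mode */
				u8	padding;
				u16	prioidx;  /* net_prio index (v1) */
				u32	classid;  /* net_cls classid (v1) */
			} __packed;
			u64	val;		  /* v2 mode: struct cgroup * */
		};
	};

	static inline void sock_cgroup_set_prioidx(struct sock_cgroup_data *skcd,
						   u16 prioidx)
	{
		struct sock_cgroup_data skcd_buf = {{ .val = READ_ONCE(skcd->val) }};

		if (!(skcd_buf.is_data & 1)) {
			/* Leaving pointer mode: the cgroup pointer (and the
			 * refcnt bump taken by cgroup_sk_alloc()) is
			 * overwritten here, which is the leak described
			 * above.
			 */
			skcd_buf.val = 0;
			skcd_buf.is_data = 1;
		}
		skcd_buf.prioidx = prioidx;
		WRITE_ONCE(skcd->val, skcd_buf.val);
	}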
> >
> > Currently we do the mode switch when someone writes to the ifpriomap cgroup
> > control file. The easiest fix is to also do the switch when a task is attached
> > to a new cgroup.
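
The diff itself isn't quoted in this reply, but from the description the
fix presumably amounts to invoking the same mode switch,
cgroup_sk_alloc_disable(), from netprio's cgroup attach callback as well.
A sketch of the idea in net/core/netprio_cgroup.c:

	static void net_prio_attach(struct cgroup_taskset *tset)
	{
		struct task_struct *p;
		struct cgroup_subsys_state *css;

		/* Force cgroup_sk_alloc() into v1 ("data") mode so sockets
		 * created after the attach don't take a v2 cgroup reference
		 * that sock_update_netprioidx() would later overwrite.
		 */
		cgroup_sk_alloc_disable();

		cgroup_taskset_for_each(p, css, tset) {
			void *v = (void *)(unsigned long)css->id;

			task_lock(p);
			iterate_fd(p->files, 0, update_netprio, v);
			task_unlock(p);
		}
	}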
> >
> > Fixes: bd1060a1d671("sock, cgroup: add sock->sk_cgroup")
>
> ^ space missing here
>
> > Reported-by: Yang Yingliang <yangyingliang@...wei.com>
> > Tested-by: Yang Yingliang <yangyingliang@...wei.com>
> > Signed-off-by: Zefan Li <lizefan@...wei.com>
Fixed up the commit message and applied, thank you.