Message-ID: <20160524084319.GH7917@esperanza>
Date: Tue, 24 May 2016 11:43:19 +0300
From: Vladimir Davydov <vdavydov@...tuozzo.com>
To: Michal Hocko <mhocko@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: memcontrol: fix possible css ref leak on oom

On Mon, May 23, 2016 at 07:44:43PM +0200, Michal Hocko wrote:
> On Mon 23-05-16 19:02:10, Vladimir Davydov wrote:
> > mem_cgroup_oom may be invoked multiple times while a process is
> > handling a page fault, in which case current->memcg_in_oom will be
> > overwritten, leaking the previously taken css reference.
>
> Have you seen this happening? I was under the impression that the page
> fault paths that have oom enabled will not retry allocations.

filemap_fault will, for readahead. This is rather unlikely, just like
the whole oom scenario, so I haven't faced this leak in production yet,
although it's pretty easy to reproduce with a contrived test.

However, even if this leak happened on my host, I would probably not
notice, because we currently have no clear means of catching css leaks.
I'm thinking about adding a file to debugfs containing brief information
about all memory cgroups, including dead ones, so that we could at least
see how many dead memory cgroups are dangling out there.

> > Signed-off-by: Vladimir Davydov <vdavydov@...tuozzo.com>
>
> That being said, I do not have anything against the patch. It is a good
> safety net; I am just not sure this can happen right now, and so the
> patch is not a stable candidate.
>
> After clarification,
> Acked-by: Michal Hocko <mhocko@...e.com>

Thanks.
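[Editor's note: a minimal user-space sketch of the refcount-leak pattern
the thread describes. All names here (struct obj, struct task,
remember_oom_ctx_*) are hypothetical stand-ins for the css refcount,
current->memcg_in_oom, and mem_cgroup_oom; this models the general
pattern and a guard against it, and is not the actual kernel patch.]

/* Sketch: overwriting a saved refcounted pointer leaks a reference. */
#include <stdio.h>

struct obj {
	int refcnt;            /* stands in for the css reference count */
};

static void obj_get(struct obj *o) { o->refcnt++; }
static void obj_put(struct obj *o) { o->refcnt--; }

struct task {
	struct obj *in_oom;    /* stands in for current->memcg_in_oom */
};

/* Buggy: blindly overwrites the saved pointer, so the reference taken
 * for the previous occupant is never put. */
static void remember_oom_ctx_buggy(struct task *t, struct obj *o)
{
	obj_get(o);
	t->in_oom = o;         /* old reference, if any, is lost here */
}

/* Guarded: bail out if a context is already recorded, so at most one
 * reference is ever held through this field. */
static void remember_oom_ctx_fixed(struct task *t, struct obj *o)
{
	if (t->in_oom)
		return;
	obj_get(o);
	t->in_oom = o;
}

static void forget_oom_ctx(struct task *t)
{
	if (t->in_oom) {
		obj_put(t->in_oom);
		t->in_oom = NULL;
	}
}

int main(void)
{
	struct obj memcg = { .refcnt = 1 };
	struct task task = { 0 };

	/* Two invocations during one page fault, as with a retried
	 * readahead allocation in filemap_fault. */
	remember_oom_ctx_buggy(&task, &memcg);
	remember_oom_ctx_buggy(&task, &memcg);
	forget_oom_ctx(&task);
	printf("buggy: refcnt = %d (expected 1)\n", memcg.refcnt); /* 2 */

	memcg.refcnt = 1;
	remember_oom_ctx_fixed(&task, &memcg);
	remember_oom_ctx_fixed(&task, &memcg);
	forget_oom_ctx(&task);
	printf("fixed: refcnt = %d (expected 1)\n", memcg.refcnt); /* 1 */

	return 0;
}

The guarded setter is one way to keep get/put calls balanced when a
per-task field can be written more than once before it is consumed;
whether the real patch guards, drops the old reference, or restructures
the callers is not shown in this thread.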